
Channel Scaling: A Scale-And-Select Approach For Transfer Learning

Abstract

Transfer learning with pre-trained neural networks is a common strategy for training classifiers in medical image analysis. Without proper channel selection, this often results in unnecessarily large models that hinder deployment and explainability. In this paper, we propose a novel approach to efficiently build small and well-performing networks by introducing channel-scaling layers. A channel-scaling layer is attached to each frozen convolutional layer, with trainable scaling weights inferring the importance of the corresponding feature channels. Unlike fine-tuning approaches, we maintain the weights of the original channels, and large datasets are not required. By imposing L1 regularization and thresholding on the scaling weights, the framework iteratively removes unnecessary feature channels from a pre-trained model. Using an ImageNet pre-trained VGG16 model, we demonstrate the capabilities of the proposed framework on classifying opacity from chest X-ray images. The results show that we can reduce the number of parameters by 95% while delivering superior performance.
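The mechanism described in the abstract can be sketched in a few lines: each frozen convolutional output is multiplied channel-wise by a trainable weight, an L1 penalty on those weights encourages sparsity, and thresholding selects the channels to keep. The sketch below is a minimal NumPy illustration of that idea; the function names, the regularization strength, and the threshold value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def channel_scale(features, weights):
    """Scale feature maps of shape (N, C, H, W) by per-channel weights (C,).

    This mimics a channel-scaling layer attached to a frozen conv layer:
    the conv weights stay fixed; only `weights` would be trained.
    """
    return features * weights.reshape(1, -1, 1, 1)

def l1_penalty(weights, lam=1e-3):
    """L1 regularization term added to the training loss (lam is assumed)."""
    return lam * np.abs(weights).sum()

def select_channels(weights, threshold=1e-2):
    """Indices of channels whose scaling weight survives the threshold."""
    return np.nonzero(np.abs(weights) >= threshold)[0]

# Toy usage: 4 channels; two weights are near zero, as if driven there
# by L1 regularization during (hypothetical) training.
feats = np.random.randn(2, 4, 8, 8)
w = np.array([0.9, 0.001, 0.5, 0.004])
scaled = channel_scale(feats, w)
kept = select_channels(w)   # channels 0 and 2 survive the threshold
pruned = scaled[:, kept]    # the pruned model keeps only these channels
```

In the paper's iterative scheme, pruning and re-training with the scaling weights would repeat until no further channels fall below the threshold.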

Authors

Wong KCL; Kashyap S; Moradi M

Volume

00

Pagination

pp. 807-811

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

April 16, 2021

DOI

10.1109/isbi48211.2021.9433872

Name of conference

2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)
