Building medical image classifiers with very limited data using segmentation networks
Abstract
Deep learning has shown promising results in medical image analysis; however,
the lack of very large annotated datasets limits its full potential. Although
transfer learning with ImageNet pre-trained classification models can alleviate
the problem, the constrained image sizes and model complexities can lead to
unnecessary increases in computational cost and decreases in performance. As
common morphological features are usually shared by different classification
tasks on the same organ, extracting such features can greatly improve
classification with limited samples. Therefore, inspired by the idea of
curriculum learning, we propose a strategy for building medical image
classifiers using features from segmentation networks. By using a segmentation
network pre-trained on data similar to those of the classification task, the
machine can first learn the simpler shape and structural concepts before
tackling the actual classification problem, which usually involves more
complicated concepts.
Using our proposed framework on a 3D three-class brain tumor type
classification problem, we achieved 82% accuracy on 191 testing samples with 91
training samples. When applied to a 2D nine-class cardiac semantic level
classification problem, we achieved 86% accuracy on 263 testing samples with
108 training samples. Comparisons with ImageNet pre-trained classifiers and
classifiers trained from scratch are presented.
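As a rough illustration of the strategy summarized above (not the authors'
implementation), the PyTorch sketch below reuses the encoder of a hypothetical
pre-trained 2D segmentation network as a feature extractor and attaches a small
classification head trained on the limited labelled set. The class names,
checkpoint file, and layer sizes are assumptions made for illustration only.

import torch
import torch.nn as nn

class SegEncoder(nn.Module):
    """Stand-in for the encoder of a segmentation network (e.g. a U-Net-style encoder)."""
    def __init__(self, in_channels=1, base=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(base, base * 2, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(base * 2, base * 4, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.features(x)

class SegFeatureClassifier(nn.Module):
    """Classifier built on top of (frozen or fine-tuned) segmentation features."""
    def __init__(self, encoder, num_classes, feat_channels=64, freeze_encoder=True):
        super().__init__()
        self.encoder = encoder
        if freeze_encoder:
            # Keep the pre-trained segmentation features fixed; only the head is trained.
            for p in self.encoder.parameters():
                p.requires_grad = False
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # global pooling keeps the head small,
            nn.Flatten(),             # which matters with few training samples
            nn.Linear(feat_channels, num_classes),
        )

    def forward(self, x):
        return self.head(self.encoder(x))

# Usage: load weights from a segmentation model pre-trained on similar data,
# then train only the classification head on the limited labelled samples.
encoder = SegEncoder(in_channels=1)
# encoder.load_state_dict(torch.load("seg_encoder_pretrained.pt"))  # hypothetical checkpoint
model = SegFeatureClassifier(encoder, num_classes=9)  # e.g. 9 cardiac semantic levels
logits = model(torch.randn(2, 1, 128, 128))           # (batch, num_classes)
print(logits.shape)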