3D Segmentation with Fully Trainable Gabor Kernels and Pearson's Correlation Coefficient
Abstract
The convolutional layer and the loss function are two fundamental components in
deep learning. Because of the success of conventional deep learning kernels,
the less versatile Gabor kernels have become less popular, despite the fact
that they can provide abundant features at different frequencies, orientations,
and scales with far fewer parameters. Among existing loss functions for
multi-class image segmentation, there is usually a tradeoff among accuracy,
robustness to hyperparameters, and the manual weight selection required to
combine different losses.
Therefore, to gain the benefits of Gabor kernels while keeping the advantage of
automatic feature generation in deep learning, we propose a fully trainable
Gabor-based convolutional layer in which all Gabor parameters are learned
through backpropagation.
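As an illustration of this idea (not the paper's exact implementation), a
minimal PyTorch sketch of such a layer is given below; the 3D Gabor
parameterization (envelope scale, frequency, two orientation angles, and
phase), the initialization ranges, and the name GaborConv3d are assumptions
made for this sketch.

    import math
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GaborConv3d(nn.Module):
        """3D convolution whose kernels are Gabor functions described by a few
        trainable parameters instead of freely trainable weights."""

        def __init__(self, in_channels, out_channels, kernel_size=5, padding=2):
            super().__init__()
            self.in_channels = in_channels
            self.out_channels = out_channels
            self.kernel_size = kernel_size
            self.padding = padding
            n = out_channels * in_channels  # one Gabor kernel per (out, in) pair
            # All Gabor parameters are nn.Parameters, so gradients from the loss
            # flow back into them through the kernel construction in forward().
            self.sigma = nn.Parameter(torch.rand(n) * 2 + 1)        # envelope scale
            self.freq = nn.Parameter(torch.rand(n) * 0.5)           # spatial frequency
            self.theta = nn.Parameter(torch.rand(n) * math.pi)      # azimuth
            self.psi = nn.Parameter(torch.rand(n) * math.pi)        # elevation
            self.phase = nn.Parameter(torch.rand(n) * 2 * math.pi)  # phase offset
            # Fixed coordinate grid centered at the kernel origin.
            r = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
            zz, yy, xx = torch.meshgrid(r, r, r, indexing="ij")
            self.register_buffer("grid", torch.stack([xx, yy, zz]))  # (3, k, k, k)

        def forward(self, inputs):
            x, y, z = self.grid  # each of shape (k, k, k)
            # Broadcast parameters to (n, 1, 1, 1) so each kernel gets its own values.
            sigma = self.sigma.view(-1, 1, 1, 1)
            freq = self.freq.view(-1, 1, 1, 1)
            theta = self.theta.view(-1, 1, 1, 1)
            psi = self.psi.view(-1, 1, 1, 1)
            phase = self.phase.view(-1, 1, 1, 1)
            # Unit direction of the sinusoidal carrier.
            ux = torch.cos(theta) * torch.cos(psi)
            uy = torch.sin(theta) * torch.cos(psi)
            uz = torch.sin(psi)
            envelope = torch.exp(-(x**2 + y**2 + z**2) / (2 * sigma**2))
            carrier = torch.cos(2 * math.pi * freq * (ux * x + uy * y + uz * z) + phase)
            kernels = (envelope * carrier).view(
                self.out_channels, self.in_channels,
                self.kernel_size, self.kernel_size, self.kernel_size)
            return F.conv3d(inputs, kernels, padding=self.padding)

Since each kernel is specified by only five scalars instead of kernel_size**3
free weights, such a layer uses far fewer parameters than a conventional
convolution of the same shape.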
Furthermore, we propose a loss function based on Pearson's correlation
coefficient, which is accurate, robust to learning rates, and does not require
manual weight selection.
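One plausible way to build such a loss is sketched below, under the assumption
that the correlation is computed per class between the predicted probability
map and the one-hot label map and then averaged; the exact formulation in the
paper may differ.

    import torch

    def pearson_cc_loss(probs, onehot, eps=1e-7):
        """Loss based on Pearson's correlation coefficient for multi-class
        segmentation.

        probs:  (B, C, D, H, W) softmax probabilities.
        onehot: (B, C, D, H, W) one-hot ground-truth labels.
        Returns 1 - mean correlation over batch and classes, so perfectly
        correlated predictions and labels give zero loss.
        """
        p = probs.flatten(2)                   # (B, C, N), N = number of voxels
        g = onehot.flatten(2).float()
        p = p - p.mean(dim=2, keepdim=True)    # center each class map
        g = g - g.mean(dim=2, keepdim=True)
        cov = (p * g).sum(dim=2)
        denom = torch.sqrt((p ** 2).sum(dim=2) * (g ** 2).sum(dim=2)) + eps
        pcc = cov / denom                      # (B, C), values in [-1, 1]
        return 1.0 - pcc.mean()

In this form every class contributes a value bounded in [-1, 1], so a single
term covers all classes and no weights for combining separate loss terms have
to be chosen by hand.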
Experiments on 43 3D brain magnetic resonance images with 19 anatomical
structures show that, using the proposed loss function with a proper
combination of conventional and Gabor-based kernels, we can train a network
with only 1.6 million parameters to achieve an average Dice coefficient of 83%.
This network is 44 times smaller than the original V-Net, which has 71 million
parameters. This paper demonstrates the potential of using learnable parametric
kernels in deep learning for 3D segmentation.