Correlation Ratio for Unsupervised Learning of Multi-modal Deformable Registration
Abstract
In recent years, unsupervised learning for deformable image registration has
been a major research focus. This approach involves training a registration
network using pairs of moving and fixed images, along with a loss function that
combines an image similarity measure and deformation regularization. For
multi-modal image registration tasks, the correlation ratio has historically been a
widely used image similarity measure, yet it remains underexplored in current
deep learning methods. Here, we propose a
differentiable correlation ratio to use as a loss function for learning-based
multi-modal deformable image registration. This approach extends the
traditionally non-differentiable implementation of the correlation ratio by
using the Parzen windowing approximation, enabling backpropagation with deep
neural networks. We validated the proposed correlation ratio on a multi-modal
neuroimaging dataset. In addition, we established a Bayesian training framework
to study how the trade-off between the deformation regularizer and similarity
measures, including mutual information and our proposed correlation ratio,
affects registration performance.
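
For concreteness, the sketch below illustrates one way a Parzen-windowed, differentiable correlation ratio could be written as a loss for a registration network. It is a minimal sketch under stated assumptions, not the authors' reference implementation: the PyTorch framing, the class name `CorrelationRatioLoss`, the Gaussian window, the bin count, and the assumption that intensities are normalized to [0, 1] are all illustrative choices.

```python
import torch
import torch.nn as nn


class CorrelationRatioLoss(nn.Module):
    """Parzen-windowed (differentiable) correlation ratio eta^2(M | F).

    The fixed image F is softly binned with Gaussian Parzen windows, and the
    warped moving image M is treated as the continuous variable. The loss
    returns -eta^2 so that minimizing it maximizes image similarity.
    Assumes both inputs are intensity-normalized to [0, 1].
    """

    def __init__(self, num_bins: int = 32, sigma_ratio: float = 0.5, eps: float = 1e-8):
        super().__init__()
        self.eps = eps
        # Bin centers spanning [0, 1]; shape (1, K, 1) for broadcasting.
        centers = torch.linspace(0.0, 1.0, num_bins)
        self.register_buffer("centers", centers.view(1, -1, 1))
        # Parzen window width as a fraction of the bin spacing (illustrative choice).
        self.sigma = sigma_ratio / (num_bins - 1)

    def forward(self, warped: torch.Tensor, fixed: torch.Tensor) -> torch.Tensor:
        b = fixed.shape[0]
        f = fixed.reshape(b, 1, -1)   # (B, 1, N) fixed-image intensities
        m = warped.reshape(b, 1, -1)  # (B, 1, N) warped moving-image intensities

        # Soft bin memberships of the fixed image: (B, K, N), normalized over bins.
        w = torch.exp(-0.5 * ((f - self.centers) / self.sigma) ** 2)
        w = w / (w.sum(dim=1, keepdim=True) + self.eps)

        # Per-bin mass, conditional mean, and conditional variance of M given F.
        n_k = w.sum(dim=2) + self.eps                                  # (B, K)
        mu_k = (w * m).sum(dim=2) / n_k                                # (B, K)
        var_k = (w * (m - mu_k.unsqueeze(2)) ** 2).sum(dim=2) / n_k    # (B, K)

        # eta^2 = 1 - E[Var(M | F)] / Var(M)
        n = m.shape[2]
        var_m = m.var(dim=2, unbiased=False).squeeze(1) + self.eps     # (B,)
        eta_sq = 1.0 - (n_k * var_k).sum(dim=1) / (n * var_m)
        return -eta_sq.mean()
```

Because every step (soft binning, weighted means, weighted variances) is a smooth tensor operation, gradients flow from the loss back through the warped image to the registration network, which is the property the abstract highlights.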