Journal article

Generative Adversarial Networks for Neuroimage Translation

Abstract

Image-to-image translation has gained popularity in the medical field as a means of transforming images from one domain to another. Medical image synthesis via domain transformation is advantageous in its ability to augment an image dataset where images for a given class are limited. From the learning perspective, this process contributes to the data-oriented robustness of the model by broadening the model's exposure to more diverse visual data and enabling it to learn more generalized features. In the case of generating additional neuroimages, synthesis offers the further advantages of yielding de-identified medical data and augmenting smaller annotated datasets. This study proposes a cycle-consistent generative adversarial network (CycleGAN) model for translating neuroimages from one field strength to another (e.g., 3 Tesla [T] to 1.5 T). The model was compared with a model based on a deep convolutional GAN (DCGAN) architecture. The CycleGAN generated synthetic and reconstructed images with reasonable accuracy. The mapping function from the source (3 T) to the target domain (1.5 T) performed optimally, with an average peak signal-to-noise ratio of 25.69 ± 2.49 dB and a mean absolute error of 2106.27 ± 1218.37. The code for this study has been made publicly available in a GitHub repository.
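For readers unfamiliar with the cycle-consistency constraint at the core of CycleGAN, the sketch below illustrates the round-trip reconstruction penalty between the two field-strength domains. It is a minimal PyTorch illustration, assuming an L1 reconstruction penalty and a weight of 10; the toy generator networks, tensor shapes, and weighting are hypothetical and not taken from the authors' implementation.

```python
import torch
import torch.nn as nn

def cycle_consistency_loss(G: nn.Module, F: nn.Module,
                           real_source: torch.Tensor,
                           real_target: torch.Tensor,
                           lambda_cyc: float = 10.0) -> torch.Tensor:
    """L1 penalty for reconstructing each image after a domain round trip.

    G maps source (3 T) -> target (1.5 T); F maps target -> source.
    lambda_cyc is an assumed weight, following the original CycleGAN paper.
    """
    l1 = nn.L1Loss()
    # Forward cycle: 3 T -> synthetic 1.5 T -> reconstructed 3 T.
    forward_cycle = l1(F(G(real_source)), real_source)
    # Backward cycle: 1.5 T -> synthetic 3 T -> reconstructed 1.5 T.
    backward_cycle = l1(G(F(real_target)), real_target)
    return lambda_cyc * (forward_cycle + backward_cycle)

if __name__ == "__main__":
    # Single conv layers stand in for the real generator networks.
    G = nn.Conv2d(1, 1, kernel_size=3, padding=1)
    F = nn.Conv2d(1, 1, kernel_size=3, padding=1)
    src = torch.randn(4, 1, 64, 64)  # stand-in batch of 3 T slices
    tgt = torch.randn(4, 1, 64, 64)  # stand-in batch of 1.5 T slices
    print(cycle_consistency_loss(G, F, src, tgt).item())
```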
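The two reported evaluation metrics can be computed as in the minimal NumPy sketch below. The data-range handling is an assumption: MRI intensities are not bounded at 255 (consistent with the magnitude of the reported mean absolute error), so this version takes the peak from the reference image. The function names are illustrative, not the authors' code.

```python
import numpy as np

def mae(reference: np.ndarray, synthetic: np.ndarray) -> float:
    """Mean absolute error between reference and synthetic images."""
    ref = reference.astype(np.float64)
    syn = synthetic.astype(np.float64)
    return float(np.mean(np.abs(ref - syn)))

def psnr(reference: np.ndarray, synthetic: np.ndarray) -> float:
    """Peak signal-to-noise ratio in decibels."""
    ref = reference.astype(np.float64)
    syn = synthetic.astype(np.float64)
    mse = np.mean((ref - syn) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    peak = ref.max()  # assumed data range; use 255.0 for 8-bit images
    return float(20 * np.log10(peak) - 10 * np.log10(mse))
```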

Authors

Czobit C; Samavi R

Journal

Journal of Computational Biology, Vol. 32, No. 6, pp. 573–583

Publisher

SAGE Publications

Publication Date

June 1, 2025

DOI

10.1089/cmb.2024.0635

ISSN

1066-5277
