Image-to-image translation has gained popularity in the medical field as a
way to transform images from one domain to another. Medical image synthesis via
domain transformation is advantageous in its ability to augment an image
dataset in which images for a given class are limited. From the learning
perspective, this process contributes to the data-oriented robustness of the
model by broadening its exposure to more diverse visual data, enabling it to
learn more generalized features. In the case of generating additional
neuroimages, it is advantageous to obtain de-identified medical data and
to augment smaller annotated datasets. This study proposes the development of a
CycleGAN model for translating neuroimages from one magnetic field strength to
another (e.g., 3 Tesla to 1.5 Tesla). The proposed model was compared with one
based on the DCGAN architecture. CycleGAN generated both the synthetic and the
reconstructed images with reasonable accuracy.
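As context for the reconstructed images mentioned above, CycleGAN trains two
mapping functions, one per translation direction, with an adversarial loss plus
a cycle-consistency penalty that pulls $F(G(x))$ back toward $x$. The sketch
below illustrates that penalty in PyTorch; the toy generators, tensor shapes,
and the weight \texttt{lambda\_cyc} are illustrative assumptions, not the
study's actual configuration.
\begin{verbatim}
import torch
import torch.nn as nn

# Stand-in generators; a full CycleGAN would use deeper (e.g.,
# ResNet-based) networks. These small conv stacks keep the sketch
# self-contained and runnable.
G_3T_to_1p5T = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                             nn.Conv2d(16, 1, 3, padding=1))  # G: 3T -> 1.5T
F_1p5T_to_3T = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                             nn.Conv2d(16, 1, 3, padding=1))  # F: 1.5T -> 3T

l1 = nn.L1Loss()
lambda_cyc = 10.0  # assumed weight (the CycleGAN paper's default)

real_3t = torch.rand(4, 1, 128, 128)    # batch of source-domain slices
real_1p5t = torch.rand(4, 1, 128, 128)  # batch of target-domain slices

fake_1p5t = G_3T_to_1p5T(real_3t)    # synthetic 1.5T image
rec_3t = F_1p5T_to_3T(fake_1p5t)     # reconstructed 3T image
fake_3t = F_1p5T_to_3T(real_1p5t)    # synthetic 3T image
rec_1p5t = G_3T_to_1p5T(fake_3t)     # reconstructed 1.5T image

# Cycle-consistency loss: F(G(x)) ~ x and G(F(y)) ~ y.
cycle_loss = lambda_cyc * (l1(rec_3t, real_3t) + l1(rec_1p5t, real_1p5t))
\end{verbatim}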
The mapping function from the source domain (3 Tesla) to the target domain
(1.5 Tesla) performed best, with an average PSNR of 25.69 $\pm$ 2.49 dB and an
average MAE of 2106.27 $\pm$ 1218.37.
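For reference, the two reported metrics can be computed per image pair as in
the minimal NumPy sketch below. The formulas follow the standard definitions of
PSNR and MAE; the intensity range and the synthetic test data are assumptions,
since the study's preprocessing and intensity scale are not specified here.
\begin{verbatim}
import numpy as np

def psnr(target: np.ndarray, generated: np.ndarray,
         data_range: float) -> float:
    """PSNR in dB: 20*log10(MAX) - 10*log10(MSE)."""
    diff = target.astype(np.float64) - generated.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 20.0 * np.log10(data_range) - 10.0 * np.log10(mse)

def mae(target: np.ndarray, generated: np.ndarray) -> float:
    """Mean absolute error on the images' native intensity scale."""
    diff = target.astype(np.float64) - generated.astype(np.float64)
    return float(np.mean(np.abs(diff)))

# Illustrative use on random data with a 12-bit-style intensity range;
# a real evaluation would pair each synthetic 1.5T image with its
# ground-truth 1.5T acquisition.
rng = np.random.default_rng(0)
gt = rng.integers(0, 4096, size=(128, 128)).astype(np.float64)
gen = gt + rng.normal(0, 50, size=gt.shape)
print(psnr(gt, gen, data_range=gt.max()), mae(gt, gen))
\end{verbatim}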