Learning fuzzy clustering for SPECT/CT segmentation via convolutional neural networks

abstract

  • Purpose: Quantitative bone single-photon emission computed tomography (QBSPECT) has the potential to provide a better quantitative assessment of bone metastasis than planar bone scintigraphy because of its ability to better quantify activity in overlapping structures. Accurate image segmentation is an important element of assessing the response of bone metastasis. However, because of the properties of QBSPECT images, the segmentation of anatomical regions of interest (ROIs) still relies heavily on manual delineation by experts. This work proposes a fast and robust automated segmentation method for partitioning a QBSPECT image into lesion, bone, and background.
  • Methods: We present a new unsupervised segmentation loss function and its semi-supervised and fully supervised variants for training a convolutional neural network (ConvNet). The loss functions were developed from the objective function of the classical Fuzzy C-means (FCM) algorithm. The first proposed loss function can be computed from the input image itself without any ground-truth labels and is thus unsupervised; the proposed supervised loss function follows the traditional paradigm of deep learning-based segmentation methods and leverages ground-truth labels during training. The third loss function combines the first two through a weighting parameter, enabling semi-supervised segmentation with a deep neural network. (A sketch of these loss terms is given after the abstract.)
  • Experiments and results: We conducted a comprehensive study comparing the proposed methods with ConvNets trained using supervised cross-entropy and Dice loss functions, and with conventional clustering methods. The Dice similarity coefficient (DSC) and several other metrics were used as figures of merit for the task of delineating lesion and bone in both simulated and clinical SPECT/CT images. We experimentally demonstrated that the proposed methods yielded good segmentation results on a clinical dataset even though training was done using realistic simulated images. On simulated SPECT/CT, the proposed unsupervised model was more accurate than the conventional clustering methods while reducing computation time by 200-fold. For the clinical QBSPECT/CT, the proposed semi-supervised ConvNet model, trained using simulated images, produced DSCs of and for lesion and bone segmentation in SPECT, and a DSC of for bone segmentation in CT. These DSCs were larger than those for standard segmentation loss functions by for SPECT segmentation and by for CT segmentation, with P-values from a paired t-test.
  • Conclusions: A ConvNet-based image segmentation method that uses novel loss functions was developed and evaluated. The method can operate in unsupervised, semi-supervised, or fully supervised modes depending on the availability of annotated training data. The results demonstrated that the proposed method provides fast and robust lesion and bone segmentation for QBSPECT/CT. The method can potentially be applied to other medical image segmentation applications.
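
To make the Methods description concrete, below is a minimal sketch of how an FCM-style unsupervised loss and its weighted semi-supervised combination could be implemented in PyTorch. It assumes the ConvNet's softmax output serves as the fuzzy membership map and that class centroids are re-estimated from the image intensities at each step; a standard cross-entropy term stands in for the paper's supervised loss. All names (fcm_unsupervised_loss, semi_supervised_loss, alpha, m) are illustrative and not taken from the paper.

```python
# Sketch only: FCM-objective-style loss with network softmax outputs as memberships.
import torch
import torch.nn.functional as F

def fcm_unsupervised_loss(image, logits, m=2.0, eps=1e-8):
    """FCM-style term: sum_k sum_i u_ik^m * (x_i - c_k)^2,
    where memberships u come from the network's softmax output."""
    u = torch.softmax(logits, dim=1)          # (B, K, H, W) fuzzy memberships
    um = u.pow(m)                             # fuzzifier exponent m
    # Closed-form FCM centroid estimate per class k from image intensities.
    centroids = (um * image).sum(dim=(0, 2, 3)) / (um.sum(dim=(0, 2, 3)) + eps)  # (K,)
    diff = image - centroids.view(1, -1, 1, 1)  # broadcast to (B, K, H, W)
    return (um * diff.pow(2)).mean()

def semi_supervised_loss(image, logits, labels=None, alpha=0.5):
    """Weighted combination: unsupervised FCM term plus a supervised term
    (cross-entropy here) when ground-truth labels are available."""
    loss = fcm_unsupervised_loss(image, logits)
    if labels is not None:
        loss = loss + alpha * F.cross_entropy(logits, labels)
    return loss
```

In this sketch, alpha plays the role of the weighting parameter described in the abstract: alpha = 0 reduces to the unsupervised loss, and increasing alpha moves the training toward the fully supervised regime.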

authors

  • Chen, Junyu
  • Li, Ye
  • Luna, Licia P
  • Chung, Hyun W
  • Rowe, Steven P
  • Du, Yong
  • Solnes, Lilja B
  • Frey, Eric C

publication date

  • July 2021