Chapter

A Cross-Modality Neural Network Transform for Semi-automatic Medical Image Annotation

Abstract

There is a pressing need in the medical imaging community to build large-scale datasets annotated with semantic descriptors. Given the cost of expert-produced annotations, we propose an automatic methodology for generating semantic descriptors for images. These can then be used as weakly labeled instances or reviewed and corrected by clinicians. Our solution is a neural network that maps a given image to a new space formed by a large number of text paragraphs written by a human expert about similar, but different, images. We then extract semantic descriptors from the text paragraphs closest to the output of the transform network to describe the input image. We used deep learning to learn mappings between images/texts and their corresponding fixed-size spaces, but a shallow network as the transform between the image and text spaces. This limits the complexity of the transform model and reduces the amount of data, in the form of image and text pairs, needed to train it. We report promising results for the proposed model in automatic descriptor generation for Doppler images of cardiac valves and show that the system captures up to 91% of the disease instances and 77% of disease severity modifiers.
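The pipeline the abstract describes can be sketched in a few lines: given fixed-size image and text embeddings (produced by deep networks in the paper), a low-complexity transform maps an image into the text space, and the nearest text paragraphs supply the descriptors. The sketch below is a loose illustration under stated assumptions, not the authors' implementation: the embeddings are random stand-ins, and a single linear map fit by ridge regression stands in for the paper's shallow transform network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the learned fixed-size embeddings; random
# vectors here only illustrate the shapes and the retrieval step.
n_pairs, img_dim, txt_dim = 200, 64, 32
img_emb = rng.normal(size=(n_pairs, img_dim))  # image-space embeddings
txt_emb = rng.normal(size=(n_pairs, txt_dim))  # text-paragraph embeddings

# Low-complexity transform from image space to text space: a linear map
# fit by ridge regression (an assumption standing in for the paper's
# shallow network). Small training pairs are the point of keeping it simple.
lam = 1e-2
W = np.linalg.solve(img_emb.T @ img_emb + lam * np.eye(img_dim),
                    img_emb.T @ txt_emb)

def nearest_paragraphs(image_vec, k=3):
    """Map an image embedding into text space and return the indices of
    the k closest text paragraphs by cosine similarity; descriptors would
    then be extracted from those paragraphs."""
    q = image_vec @ W
    sims = (txt_emb @ q) / (
        np.linalg.norm(txt_emb, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sims)[:k]

idx = nearest_paragraphs(img_emb[0])
```

Keeping the cross-modality transform shallow, as the abstract notes, limits model capacity so that relatively few image–text pairs suffice to fit it.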

Authors

Moradi M; Guo Y; Gur Y; Negahdar M; Syeda-Mahmood T

Book title

Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016

Series

Lecture Notes in Computer Science

Volume

9901

Pagination

pp. 300-307

Publisher

Springer Nature

Publication Date

January 1, 2016

DOI

10.1007/978-3-319-46723-8_35
