
Body part and imaging modality classification for a general radiology cognitive assistant

Abstract

Decision support systems built for radiologists need to cover a fairly wide range of image types and route each image to the relevant algorithm. Furthermore, training such networks requires building large datasets with significant effort in image curation. When the DICOM tags of an image are unavailable or unreliable, a classifier that can automatically detect both the body part depicted in the image and the imaging modality is necessary. Previous work has used imaging and textual features to distinguish between imaging modalities. In this work, we present a model for the simultaneous classification of body part and imaging modality, which to our knowledge has not been done before, as part of a larger effort to create a cognitive assistant for radiologists. The classification network covers 10 classes and is built on a VGG architecture, using transfer learning to learn generic features. An accuracy of 94.8% is achieved.
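To make the described setup concrete, the sketch below shows one common way to build a 10-class classifier on a frozen VGG base with transfer learning, as the abstract describes. This is a minimal illustration only: the paper does not specify the framework, VGG variant, input resolution, classifier head, or training schedule, so all of those choices (Keras, VGG16, 224x224 inputs, the dense head, and the optimizer settings) are assumptions, not the authors' implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10  # joint body-part / imaging-modality classes, per the abstract

# Assumed setup: VGG16 convolutional base pre-trained on ImageNet,
# frozen so that only the new classification head is trained
# (one standard form of transfer learning; the paper's exact variant is not specified).
base = tf.keras.applications.VGG16(
    weights="imagenet",
    include_top=False,
    input_shape=(224, 224, 3),
)
base.trainable = False  # keep the generic pre-trained features fixed

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Hypothetical usage with a prepared tf.data pipeline of curated images:
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```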

Authors

Agunwa C; Moradi M; Wong KCL; Syeda-Mahmood T

Volume

10949

Publisher

SPIE, the international society for optics and photonics

Publication Date

March 15, 2019

DOI

10.1117/12.2513074

Name of conference

Medical Imaging 2019: Image Processing

Conference proceedings

Progress in Biomedical Optics and Imaging

ISSN

1605-7422