Preprint

AnaXNet: Anatomy Aware Multi-label Finding Classification in Chest X-ray

Abstract

Radiologists usually examine the anatomical regions of a chest X-ray as well as the overall image before making a decision. However, most existing deep learning models classify only the entire X-ray image, failing to utilize important anatomical information. In this paper, we propose a novel multi-label chest X-ray classification model that accurately classifies image findings and also localizes them to their correct anatomical regions. Specifically, our model consists of two modules: a detection module and an anatomical dependency module. The latter utilizes graph convolutional networks, which enable our model to learn not only the label dependencies but also the relationships between anatomical regions in the chest X-ray. We further utilize a method to efficiently create an adjacency matrix for the anatomical regions from the correlation of labels across the different regions. Detailed experiments and analysis of our results show the effectiveness of our method compared to current state-of-the-art multi-label chest X-ray classification methods, while also providing accurate location information.
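To make the described architecture concrete, the sketch below gives a minimal, hypothetical PyTorch rendition of the anatomical dependency module: per-region features (e.g., ROI-pooled outputs of the detection module) pass through two graph-convolution steps whose adjacency matrix is built from label correlations across regions. The function and class names, tensor dimensions, and the cosine-similarity adjacency construction are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


def build_adjacency(region_labels: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Assumed adjacency construction from label correlations.

    region_labels: (num_samples, num_regions, num_labels) binary tensor
    marking which findings occur in which anatomical region. Entry
    A[i, j] is the cosine similarity between the flattened label
    histories of regions i and j over the training set.
    """
    flat = region_labels.permute(1, 0, 2).reshape(region_labels.size(1), -1).float()
    flat = flat / (flat.norm(dim=1, keepdim=True) + eps)
    return flat @ flat.T  # (num_regions, num_regions)


class AnatomicalGCN(nn.Module):
    """Sketch of a GCN-based anatomical dependency module (illustrative)."""

    def __init__(self, in_dim: int, hidden_dim: int, num_labels: int,
                 adjacency: torch.Tensor):
        super().__init__()
        # Add self-loops and row-normalize so each region aggregates a
        # weighted average of its own and its neighbors' features.
        a = adjacency + torch.eye(adjacency.size(0))
        self.register_buffer("A", a / a.sum(dim=1, keepdim=True))
        self.gc1 = nn.Linear(in_dim, hidden_dim)
        self.gc2 = nn.Linear(hidden_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_labels)

    def forward(self, region_feats: torch.Tensor) -> torch.Tensor:
        # region_feats: (batch, num_regions, in_dim), e.g. ROI-pooled
        # features produced by a detection module.
        h = torch.relu(self.A @ self.gc1(region_feats))
        h = torch.relu(self.A @ self.gc2(h))
        # One multi-label logit vector per anatomical region, so each
        # predicted finding is tied to the region it appears in.
        return self.classifier(h)  # (batch, num_regions, num_labels)


# Toy usage with made-up dimensions (18 regions, 9 findings):
labels = (torch.rand(500, 18, 9) > 0.9).float()
model = AnatomicalGCN(in_dim=1024, hidden_dim=256, num_labels=9,
                      adjacency=build_adjacency(labels))
logits = model(torch.randn(4, 18, 1024))  # -> (4, 18, 9)
```

The key design point this sketch captures is that classification is performed per region rather than per image, which is what lets the model report both the finding and the anatomical region it occurs in.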

Authors

Agu NN; Wu JT; Chao H; Lourentzou I; Sharma A; Moradi M; Yan P; Hendler J

Publication date

May 20, 2021

DOI

10.48550/arXiv.2105.09937

Preprint server

arXiv
