Chest ImaGenome Dataset for Clinical Reasoning

Abstract

Despite recent progress in the automatic detection of radiologic findings from Chest X-Ray (CXR) images, quantitative evaluation of the explainability of these models is hampered by the lack of locally labeled datasets for different findings. With the exception of a few expert-labeled small-scale datasets for specific findings, such as pneumonia and pneumothorax, most CXR deep learning models to date are trained on global "weak" labels extracted from text reports, or trained via a joint image and unstructured text learning strategy. Inspired by the Visual Genome effort in the computer vision community, we constructed the first Chest ImaGenome dataset with a scene graph data structure to describe 242,072 images. Local annotations are automatically produced using a joint rule-based natural language processing (NLP) and atlas-based bounding box detection pipeline. Through a radiologist-constructed CXR ontology, the annotations for each CXR are connected as an anatomy-centered scene graph, useful for image-level reasoning and multimodal fusion applications. Overall, we provide: i) 1,256 combinations of relation annotations between 29 CXR anatomical locations (objects with bounding box coordinates) and their attributes, structured as a scene graph per image, ii) over 670,000 localized comparison relations (improved, worsened, or no change) between the anatomical locations across sequential exams, as well as iii) a manually annotated gold-standard scene graph dataset from 500 unique patients.
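
To make the anatomy-centered scene graph structure concrete, the short Python sketch below shows one plausible way a single image's annotations could be organized: anatomical locations as objects with bounding boxes, finding attributes attached to each object, and comparison relations linking an object to its state in the previous exam. The field names, identifiers, coordinates, and values are illustrative assumptions only, not the dataset's actual schema.

# Minimal sketch (not the actual Chest ImaGenome schema): one image's
# anatomy-centered scene graph represented as a plain Python dictionary.
# All field names, identifiers, and values are illustrative assumptions.
scene_graph = {
    "image_id": "example_cxr_001",
    "objects": [
        {
            "name": "left lower lung zone",        # one of the 29 anatomical locations
            "bbox": [120, 340, 260, 480],          # assumed [x1, y1, x2, y2] pixel coordinates
            "attributes": ["lung opacity", "atelectasis"],
        },
    ],
    "comparison_relations": [
        {
            "name": "left lower lung zone",
            "attribute": "lung opacity",
            "change": "improved",                  # improved | worsened | no change
            "prior_image_id": "example_cxr_000",   # previous exam of the same patient
        },
    ],
}

# Example query: which locations improved relative to the prior exam?
improved = [
    r["name"]
    for r in scene_graph["comparison_relations"]
    if r["change"] == "improved"
]
print(improved)  # ['left lower lung zone']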

Authors

Wu JT; Agu NN; Lourentzou I; Sharma A; Paguio JA; Yao JS; Dee EC; Mitchell W; Kashyap S; Giovannini A

Publication Date

January 1, 2021

Conference proceedings

Advances in Neural Information Processing Systems

ISSN

1049-5258
