Conference

Trust Metrics for Medical Deep Learning Using Explainable-AI Ensemble for Time Series Classification

Abstract

Trustworthiness is a roadblock to the mass adoption of artificial intelligence (AI) in medicine. This research developed a framework to explore trustworthiness as it applies to AI in medicine with respect to common stakeholders in medical device development. Within this framework, the element of explainability of AI models was explored by evaluating an ensemble of explainable AI (XAI) methods. The current literature offers a litany of XAI methods that provide a variety of insights into the learning and function of AI models by producing human-readable outputs for an AI's learned processes. However, these outputs are highly subjective and of varying quality, and there are currently no metrics or methods for objectively comparing explanations generated by different XAI methods or for testing the repeatable consistency of explanations from a single XAI method. This research presents two constituent elements, similarity and stability, to explore the concept of explainability for time-series medical data (ECG), thus providing a repeatable and testable framework for evaluating XAI methods and their generated explanations. This is accomplished by presenting subject matter expert (SME) annotated ECG signals (time-series signals), represented as images, to AI models and XAI methods. The XAI methods Vanilla Saliency, SmoothGrad, GradCAM, and GradCAM++ were used to generate explanations for a VGG-16 based deep learning classification model. The framework provides insights into the explanations an XAI method generates for the AI and how closely that learning corresponds to SME decision making. It also objectively evaluates how closely the explanations generated by any one XAI method resemble the outputs of other XAI methods. Lastly, the framework offers insights into possible enhancements to XAI that go beyond what the SMEs identified in their decision making.
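The abstract does not specify how similarity and stability are computed. As a minimal illustrative sketch, assuming each XAI explanation is a 2-D saliency map and assuming cosine similarity as the comparison measure (the paper's actual metric may differ), pairwise similarity across methods and stability across repeated runs of one method could be framed like this:

```python
import numpy as np

def map_similarity(map_a, map_b):
    """Cosine similarity between two flattened saliency maps.

    Assumption: maps are non-negative arrays of equal shape, e.g.
    heatmaps from GradCAM and SmoothGrad for the same ECG image.
    """
    a = np.asarray(map_a, dtype=float).ravel()
    b = np.asarray(map_b, dtype=float).ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def stability(maps):
    """Mean pairwise similarity among explanations produced by
    repeated runs of a single XAI method on the same input.
    A value near 1.0 indicates repeatably consistent explanations."""
    scores = [map_similarity(maps[i], maps[j])
              for i in range(len(maps))
              for j in range(i + 1, len(maps))]
    return float(np.mean(scores))
```

For example, three identical maps yield a stability of 1.0, while noisy or divergent repeat explanations pull the score toward 0. The same `map_similarity` function can also compare outputs across different XAI methods (e.g., GradCAM vs. GradCAM++) for the cross-method similarity element.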

Authors

Siddiqui K; Doyle TE

Volume

00

Pagination

pp. 370-377

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

January 20, 2022

DOI

10.1109/ccece49351.2022.9918458

Name of conference

2022 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE)