Conference

Quantifying Deep Learning Model Uncertainty in Conformal Prediction

Abstract

Precise estimation of predictive uncertainty in deep neural networks is a critical requirement for reliable decision-making in machine learning and statistical modeling, particularly in the context of medical AI. Conformal Prediction (CP) has emerged as a promising framework for representing model uncertainty by providing well-calibrated confidence levels for individual predictions. However, the quantification of model uncertainty in conformal prediction remains an active research area that has yet to be fully addressed. In this paper, we explore state-of-the-art CP methodologies and their theoretical foundations. We propose a probabilistic approach to quantifying the model uncertainty derived from the prediction sets produced in conformal prediction, and we provide certified boundaries for the computed uncertainty. By doing so, we allow model uncertainty measured by CP to be compared with other uncertainty quantification methods, such as Bayesian approaches (e.g., MC-Dropout and Deep Ensembles) and Evidential approaches.
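
For readers unfamiliar with how CP produces the prediction sets the abstract refers to, below is a minimal sketch of split (inductive) conformal classification, the standard construction underlying such sets. It is illustrative only and is not the quantification method proposed in the paper; the function name split_conformal_sets, the score choice (one minus the true-class probability), and the toy Dirichlet data are assumptions made for this example.

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Build conformal prediction sets from held-out calibration data.

    cal_probs:  (n, K) predicted class probabilities on calibration points
    cal_labels: (n,)   true calibration labels
    test_probs: (m, K) predicted class probabilities on test points
    alpha:      miscoverage level; sets cover the true label with
                probability >= 1 - alpha under exchangeability
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Conformal quantile with the finite-sample (n + 1) correction.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    qhat = np.quantile(scores, q_level, method="higher")
    # Include every class whose score falls at or below the threshold.
    return test_probs >= 1.0 - qhat  # (m, K) boolean membership mask

# Toy usage: greater model uncertainty shows up as larger prediction sets.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(3), size=200)
cal_labels = rng.integers(0, 3, size=200)
test_probs = rng.dirichlet(np.ones(3), size=5)
sets = split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1)
print(sets.sum(axis=1))  # set size per test point: larger = more uncertain
```

Prediction-set size is a common informal proxy for uncertainty; the paper's contribution is a probabilistic measure derived from these sets, with certified boundaries, so that CP-based uncertainty can be compared with Bayesian and Evidential estimates.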

Authors

Karimi H; Samavi R

Volume

1

Pagination

pp. 142-148

Publisher

Association for the Advancement of Artificial Intelligence (AAAI)

Publication Date

October 3, 2023

DOI

10.1609/aaaiss.v1i1.27492

Conference proceedings

Proceedings of the AAAI Symposium Series

Issue

1

ISSN

2994-4317