Quantifying Deep Learning Model Uncertainty in Conformal Prediction
Abstract
Precise estimation of predictive uncertainty in deep neural networks is a
critical requirement for reliable decision-making in machine learning and
statistical modeling, particularly in the context of medical AI. Conformal
Prediction (CP) has emerged as a promising framework for representing model
uncertainty by providing well-calibrated confidence levels for individual
predictions. However, the quantification of model uncertainty in conformal
prediction remains an active research area that has yet to be fully addressed.
In this paper, we explore state-of-the-art CP methodologies and their
theoretical foundations. We propose a probabilistic approach to quantifying
model uncertainty from the prediction sets produced by conformal prediction
and provide certified bounds for the computed uncertainty. By doing so, we
enable model uncertainty measured by CP to be compared with other uncertainty
quantification methods, such as Bayesian methods (e.g., MC-Dropout and
DeepEnsemble) and Evidential approaches.
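
For background, the sketch below illustrates the kind of prediction sets the abstract refers to, using standard split conformal prediction for classification. It is only an illustrative example under assumed inputs (softmax outputs `cal_probs`/`test_probs` and calibration labels `cal_labels`), with a common softmax-based nonconformity score; it is not the uncertainty quantification method proposed in the paper.

```python
# Minimal sketch of split conformal prediction for classification.
# Illustrative only: assumes softmax outputs from any trained classifier
# and a held-out calibration set; not the paper's proposed method.
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    n = len(cal_labels)
    # Nonconformity score: 1 minus the softmax probability of the true class.
    cal_scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of the calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q_hat = np.quantile(cal_scores, q_level, method="higher")
    # Prediction set: every class whose score does not exceed the threshold.
    return test_probs >= 1.0 - q_hat  # boolean mask, shape (n_test, n_classes)

# Usage with random softmax outputs (hypothetical data for illustration).
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(5), size=500)
cal_labels = rng.integers(0, 5, size=500)
test_probs = rng.dirichlet(np.ones(5), size=3)
sets = conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1)
print(sets)  # larger prediction sets indicate higher predictive uncertainty
```

With this construction, the prediction set contains the true label with probability at least 1 - alpha under exchangeability, and the size of the set is one natural signal of model uncertainty that the paper's probabilistic approach builds on.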