Journal article

Testing Bayes Error Rate Estimators in Difficult Situations Using Monte Carlo Simulations

Abstract

The Bayes Error Rate (BER) is the fundamental limit on the achievable generalizable classification accuracy of any machine learning model, due to inherent uncertainty within the data. BER estimators offer insight into the difficulty of a classification problem and set expectations for optimal classification performance. To be useful, the estimators must also be accurate with a limited number of samples on multivariate problems with unknown class distributions. To determine which estimators meet these minimum requirements for "usefulness", an in-depth examination of their accuracy is conducted using Monte Carlo simulations with synthetic data, yielding confidence bounds for binary classification. To examine the usability of the estimators for real-world applications, new non-linear multi-modal test scenarios are introduced. In each scenario, 2500 Monte Carlo simulations are run over a wide range of BER values. In a comparison of k-Nearest Neighbor (kNN), Generalized Henze-Penrose (GHP) divergence, and Kernel Density Estimation (KDE) techniques, results show that kNN is by far the most accurate non-parametric estimator. To reach the target of an under-5% range for the 95% confidence bounds, a minimum of 1000 samples per class is required. As more features are added, more samples are needed: with only 4 features, 2500 samples per class are required. Other estimators do become more accurate than kNN as more features are added, but consistently fail to meet the target range.
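The evaluation method the abstract describes can be illustrated with a minimal sketch (not the paper's implementation): run repeated Monte Carlo trials on synthetic data with a known analytic BER, estimate the error non-parametrically with a kNN classifier, and compare. The 1-D Gaussian scenario, the choice of k, and the trial count below are illustrative assumptions, not values from the paper; the 1000-samples-per-class figure echoes the abstract's stated minimum.

```python
# Illustrative sketch: Monte Carlo check of a kNN-based error estimate
# against the analytic BER of a synthetic two-Gaussian scenario.
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
MU = 2.0  # class separation; for equal priors the analytic BER is Phi(-MU/2)

def sample(n_per_class):
    """Draw n samples from each of two unit-variance 1-D Gaussian classes."""
    x = np.concatenate([rng.normal(0.0, 1.0, n_per_class),
                        rng.normal(MU, 1.0, n_per_class)])
    y = np.concatenate([np.zeros(n_per_class, int), np.ones(n_per_class, int)])
    return x, y

def knn_error_rate(x_tr, y_tr, x_te, y_te, k=25):
    """Test error of a brute-force kNN majority-vote classifier."""
    errors = 0
    for x, y in zip(x_te, y_te):
        # indices of the k nearest training points (unordered)
        nearest = y_tr[np.argpartition(np.abs(x_tr - x), k)[:k]]
        errors += int(nearest.mean() > 0.5) != y
    return errors / len(y_te)

# 1000 samples per class: the abstract's stated minimum for tight bounds.
trials = [knn_error_rate(*sample(1000), *sample(1000)) for _ in range(20)]
est = float(np.mean(trials))
true_ber = 0.5 * (1.0 + erf(-MU / (2.0 * sqrt(2.0))))  # Phi(-MU/2) ~ 0.159
print(f"mean kNN error estimate: {est:.3f}  vs. analytic BER: {true_ber:.3f}")
```

The spread of the per-trial estimates across many such runs is what yields the empirical confidence bounds the paper reports; the raw kNN classifier error shown here slightly overestimates the BER, which is why dedicated kNN-based BER estimators apply corrections on top of this quantity.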

Authors

Wheat L; Mohrenschildt MV; Habibi S

Journal

IEEE Access, Vol. 13, pp. 165810–165829

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

January 1, 2025

DOI

10.1109/access.2025.3609630

ISSN

2169-3536
