Journal article

A simple method for combining estimates to improve the overall error rates in classification

Abstract

We present a new and easy-to-implement procedure for combining J ≥ 2 different classifiers in order to develop more effective classification rules. The method works by finding nonparametric estimates of the class conditional expectation of a new observation (that has to be classified), conditional on the vector of J predicted values corresponding to the J individual classifiers. Here, we propose a data-splitting method to carry out the estimation of various class conditional expectations. It turns out that, under rather minimal assumptions, the proposed combined classifier is optimal in the sense that its overall misclassification error rate is asymptotically less than (or equal to) that of any one of the individual classifiers. Simulation studies are also carried out to evaluate the proposed method. Furthermore, to make the numerical results more challenging, we also consider stable distributions (Cauchy) with rather high dimensions.
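The data-splitting idea described above can be illustrated with a minimal sketch. This is not the authors' exact estimator, only an assumed simple version: the J classifiers output discrete class labels, so the class conditional expectation given the vector of J predicted labels can be estimated by empirical class frequencies on a held-out split. The function names and the majority-vote fallback for unseen prediction patterns are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from collections import Counter, defaultdict

def combine_classifiers(preds_split, y_split, preds_new):
    """Combine J classifiers via held-out class frequencies.

    preds_split : (n, J) array of labels predicted by the J classifiers
                  on a held-out split (the data-splitting step).
    y_split     : (n,) array of true class labels on that split.
    preds_new   : (J,) vector of the J classifiers' predictions for a
                  new observation to be classified.
    """
    # Empirically estimate P(Y = c | prediction vector) by counting,
    # per observed prediction pattern, how often each true class occurs.
    counts = defaultdict(Counter)
    for row, y in zip(preds_split, y_split):
        counts[tuple(row)][y] += 1

    key = tuple(preds_new)
    if key in counts:
        # Classify to the class with the highest estimated
        # conditional probability given this prediction pattern.
        return counts[key].most_common(1)[0][0]
    # Fallback for a pattern never seen on the held-out split:
    # plain majority vote among the J individual predictions
    # (an assumption of this sketch, not part of the paper).
    return Counter(preds_new).most_common(1)[0][0]
```

For example, with two base classifiers whose held-out predictions are `[[0,1],[0,1],[0,0],[1,1]]` and true labels `[1,1,0,1]`, a new observation with prediction vector `[0,1]` is assigned the class most frequent among held-out cases sharing that pattern.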

Authors

Balakrishnan N; Mojirsheibani M

Journal

Computational Statistics, Vol. 30, No. 4, pp. 1033–1049

Publisher

Springer Nature

Publication Date

December 1, 2015

DOI

10.1007/s00180-015-0571-0

ISSN

0943-4062
