Conference

Pattern classification as an ill-posed, inverse problem: a regularization approach

Abstract

Pattern classification can be viewed as an ill-posed, inverse problem to which the method of regularization can be applied. In doing so, a proper theoretical framework is provided for the application of radial basis function (RBF) networks to pattern classification, with strong links to the classical kernel regression estimator (KRE)-based classifiers that estimate the underlying posterior class densities. Assuming that the training patterns are labeled with binary-valued vectors indicating their class membership, a regularized solution can be designed so that each resultant network output (one for each class) can be interpreted as a nonparametric estimator of the corresponding posterior, i.e., conditional, class distribution. These RBFs generalize the classical KREs, e.g., the Parzen window estimators (PWEs), which can therefore be recovered as a particular limiting case. The authors describe analytically how constraining the classifier network coefficients to be positive during their solution alters the nature of the original regularization problem, and demonstrate experimentally the beneficial effect that such a constraint has on classifier complexity.
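
The following is a minimal sketch, not the authors' exact formulation, of the idea the abstract describes: an RBF network trained by regularized least squares on binary class-indicator targets, so that each output approximates a posterior class probability, with an optional non-negativity constraint on the coefficients. The function and parameter names (gaussian_kernel, fit_rbf_classifier, width, lam) are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: regularized RBF classifier with binary indicator targets.
# Assumes Gaussian kernels centered at the training patterns; not the
# authors' exact algorithm.
import numpy as np
from scipy.optimize import nnls
from scipy.spatial.distance import cdist


def gaussian_kernel(A, B, width):
    """Gaussian RBF kernel matrix between rows of A and rows of B."""
    return np.exp(-cdist(A, B, "sqeuclidean") / (2.0 * width ** 2))


def fit_rbf_classifier(X, y, width=1.0, lam=1e-2, nonneg=False):
    """Solve min ||K W - Y||^2 + lam ||W||^2, one column of W per class.

    X: (n, d) training patterns; y: (n,) integer class labels.
    Y is the binary class-membership indicator matrix from the abstract.
    """
    classes = np.unique(y)
    n = len(X)
    Y = (y[:, None] == classes[None, :]).astype(float)  # binary targets
    K = gaussian_kernel(X, X, width)
    if not nonneg:
        # Closed-form regularized (ridge-type) solution.
        W = np.linalg.solve(K + lam * np.eye(n), Y)
    else:
        # Non-negative coefficients via NNLS on an augmented system
        # [K; sqrt(lam) I] w ~ [y_c; 0]; many coefficients typically
        # shrink to exactly zero, which reduces network complexity.
        A = np.vstack([K, np.sqrt(lam) * np.eye(n)])
        W = np.column_stack(
            [nnls(A, np.concatenate([Y[:, c], np.zeros(n)]))[0]
             for c in range(len(classes))]
        )
    return W, classes


def predict(X_new, X_train, W, classes, width=1.0):
    """Network outputs (one per class) read as posterior estimates."""
    F = gaussian_kernel(X_new, X_train, width) @ W
    return classes[np.argmax(F, axis=1)], F
```

As a rough illustration of the limiting case mentioned in the abstract: letting lam grow large drives the unconstrained solution toward (1/lam) * Y, so the network outputs reduce, up to normalization, to a Parzen-window / kernel-regression estimate of the posteriors.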

Authors

Yee P; Haykin S

Volume

1

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

January 1, 1993

DOI

10.1109/icassp.1993.319189

Name of conference

IEEE International Conference on Acoustics, Speech, and Signal Processing

Conference proceedings

1993 IEEE International Conference on Acoustics, Speech, and Signal Processing

ISSN

1520-6149