abstract
- Mixture models are becoming a popular tool for the clustering and classification of high-dimensional data. In such high-dimensional applications, model selection is problematic. The Bayesian information criterion (BIC), which is popular in lower-dimensional applications, tends to underestimate the true number of components in high dimensions. We introduce an adaptive LASSO-penalized BIC (ALPBIC) to mitigate this problem. The efficacy of the ALPBIC is illustrated via applications of parsimonious mixtures of factor analyzers. Analyses of simulated and real data show that model selection by the ALPBIC is consistent as the number of observations increases.