Parsimonious mixture-of-experts based on mean mixture of multivariate normal distributions (Journal Article)

abstract

  • The mixture-of-experts (MoE) paradigm learns complex models by combining several "experts" through a probabilistic mixture model. Each expert handles a region of the data space, with a gating function controlling the data-to-expert assignment. The MoE framework has been used extensively in machine learning and statistics to design non-linear models that capture heterogeneity in data for regression, classification and clustering. Existing MoE multi-target regression (MoE-MTR) models for continuous data are based on multivariate normal distributions. In many practical situations, however, one or more groups of observations exhibit asymmetric and heavy-tailed behaviour, and inference based on symmetric distributions can then unduly affect the fit of the regression model. We introduce a novel robust multivariate non-normal MoE model based on the mean mixture of normal distributions. The proposed model addresses the weaknesses of normal MoE-MTR models on skewed, heavy-tailed and noisy data. Maximum likelihood estimates of the model parameters are obtained via an expectation-maximization (EM)-type algorithm, and parsimony is achieved by imposing suitable constraints on the experts' dispersion matrices. The usefulness of the proposed methodology is illustrated on simulated and real data sets.
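To make the gating mechanism concrete, below is a minimal sketch of the MoE computation the abstract describes. It is an illustration under assumptions, not the paper's implementation: a softmax-linear gate and plain multivariate normal experts stand in for the paper's mean-mixture-of-normals experts, and the function names (gate_weights, moe_density, responsibilities) are hypothetical.

```python
# Minimal MoE sketch for multi-target regression: softmax gating over
# multivariate normal experts.  Assumed illustration only; the paper's
# experts use mean mixtures of normals, and all names are hypothetical.
import numpy as np
from scipy.stats import multivariate_normal


def gate_weights(x, V):
    """Softmax gate: row k of V holds expert k's gating coefficients
    (last column is the intercept)."""
    scores = V @ np.append(x, 1.0)
    scores -= scores.max()          # guard against overflow in exp
    w = np.exp(scores)
    return w / w.sum()


def moe_density(y, x, V, betas, covs):
    """Mixture density of response y given covariates x:
    sum_k pi_k(x) * N(y | B_k [x, 1], Sigma_k)."""
    pis = gate_weights(x, V)
    comps = np.array([
        multivariate_normal.pdf(y, mean=B @ np.append(x, 1.0), cov=S)
        for B, S in zip(betas, covs)
    ])
    return float(pis @ comps)


def responsibilities(y, x, V, betas, covs):
    """E-step quantity of an EM-type algorithm: posterior probability
    that each expert generated the pair (x, y)."""
    pis = gate_weights(x, V)
    comps = np.array([
        multivariate_normal.pdf(y, mean=B @ np.append(x, 1.0), cov=S)
        for B, S in zip(betas, covs)
    ])
    joint = pis * comps
    return joint / joint.sum()


# Toy usage: 2 experts, 2 covariates, 2-dimensional response.
rng = np.random.default_rng(0)
x = rng.normal(size=2)
y = rng.normal(size=2)
V = rng.normal(size=(2, 3))                          # gating coefficients
betas = [rng.normal(size=(2, 3)) for _ in range(2)]  # expert regressions
covs = [np.eye(2), 0.5 * np.eye(2)]                  # expert dispersions
print(moe_density(y, x, V, betas, covs))
print(responsibilities(y, x, V, betas, covs))        # sums to 1
```

A parsimonious variant in the spirit of the abstract would constrain the dispersion matrices in covs, for example forcing them to be shared or diagonal across experts, thereby reducing the number of free covariance parameters.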

publication date

  • December 2022

published in