Preprint

Mixture of Experts Softens the Curse of Dimensionality in Operator Learning

Abstract

We study the approximation-theoretic implications of mixture-of-experts architectures for operator learning, in which the complexity of a single large neural operator is distributed across many small neural operators (NOs) and each input is routed to exactly one NO via a decision tree. We analyze how this tree-based routing and expert decomposition affect approximation power, sample complexity, and stability. Our main result is a distributed universal approximation theorem for mixtures of neural operators (MoNOs): any Lipschitz nonlinear operator between $L^2([0,1]^d)$ spaces can be uniformly approximated over the Sobolev unit ball to arbitrary accuracy $\varepsilon>0$ by an MoNO in which each expert NO has depth, width, and rank scaling as $\mathcal{O}(\varepsilon^{-1})$. Although the number of experts may grow with the target accuracy, each NO remains small enough to fit within the active memory of standard hardware at reasonable accuracy levels. Our analysis also yields new quantitative approximation rates for classical NOs uniformly approximating uniformly continuous nonlinear operators on compact subsets of $L^2([0,1]^d)$.
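The sketch below (PyTorch, not taken from the paper) illustrates the architecture the abstract describes: a collection of small expert operators, each a toy stand-in for a neural operator acting on a function discretized on a grid over $[0,1]$ (i.e. $d=1$ for simplicity), with a hand-rolled depth-2 decision tree that routes every input to exactly one expert. All class names, the routing functionals, and the expert design are hypothetical choices made for illustration only.

```python
# Minimal, illustrative sketch (not the authors' implementation): a mixture of
# small "expert" operators with hard, tree-based routing. Each expert is a toy
# stand-in for a neural operator acting on a function sampled on a uniform grid
# of [0, 1]; the router is a tiny hand-rolled decision tree over simple input
# statistics. All names and design choices here are hypothetical.
import torch
import torch.nn as nn


class TinyExpertNO(nn.Module):
    """Toy expert: a shallow network on grid values of the input function.

    A real neural operator layer (with its own depth, width, and rank budget)
    would replace this; a small MLP keeps the example self-contained.
    """

    def __init__(self, n_grid: int, width: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_grid, width),
            nn.GELU(),
            nn.Linear(width, n_grid),
        )

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (batch, n_grid) -- samples of the input function on a uniform grid.
        return self.net(u)


class TreeRoutedMoNO(nn.Module):
    """Mixture of expert operators with hard routing by a depth-2 decision tree.

    The tree splits on two simple functionals of the input (its mean and its
    discrete L2 norm), so every input is sent to exactly one expert.
    """

    def __init__(self, n_grid: int, n_experts: int = 4):
        super().__init__()
        assert n_experts == 4, "the depth-2 tree in this sketch has 4 leaves"
        self.experts = nn.ModuleList(TinyExpertNO(n_grid) for _ in range(n_experts))

    def route(self, u: torch.Tensor) -> torch.Tensor:
        # Depth-2 tree: split on the sign of the mean, then on whether the
        # discrete L2 norm exceeds 1. Returns a leaf index in {0, 1, 2, 3}.
        mean_split = (u.mean(dim=-1) > 0).long()
        norm_split = (u.pow(2).mean(dim=-1).sqrt() > 1.0).long()
        return 2 * mean_split + norm_split

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        leaf = self.route(u)
        out = torch.zeros_like(u)
        for k, expert in enumerate(self.experts):
            mask = leaf == k
            if mask.any():
                out[mask] = expert(u[mask])  # each input handled by one expert
        return out


if __name__ == "__main__":
    n_grid = 128                     # grid points discretizing [0, 1]
    model = TreeRoutedMoNO(n_grid)
    u = torch.randn(8, n_grid)       # batch of discretized input functions
    v = model(u)                     # approximate operator output on the grid
    print(v.shape)                   # torch.Size([8, 128])
```

The hard, tree-based routing means each forward pass evaluates only one small expert, which is the point of the distributed approximation result: the per-expert parameter budget can stay at the $\mathcal{O}(\varepsilon^{-1})$ scale stated in the abstract while the number of tree leaves absorbs the remaining complexity.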

Authors

Kratsios A; Furuya T; Benitez JAL; Lassas M; de Hoop M

Publication date

December 1, 2025

DOI

10.48550/arxiv.2404.09101

Preprint server

arXiv
