
UNIVERSAL APPROXIMATION UNDER CONSTRAINTS IS POSSIBLE WITH TRANSFORMERS

Abstract

Many practical problems need the output of a machine learning model to satisfy a set of constraints, K. There are, however, no known guarantees that classical neural networks can exactly encode constraints while simultaneously achieving universality. We provide a quantitative constrained universal approximation theorem which guarantees that for any convex or non-convex compact set K and any continuous function f : R^n → K, there is a probabilistic transformer F whose randomized outputs all lie in K and whose expected output uniformly approximates f. Our second main result is a "deep neural version" of Berge's (1963) Maximum Theorem. The result guarantees that given an objective function L, a constraint set K, and a family of soft constraint sets, there is a probabilistic transformer F that approximately minimizes L and whose outputs belong to K; moreover, F approximately satisfies the soft constraints. Our results imply the first universal approximation theorem for classical transformers with exact convex constraint satisfaction, and a chart-free universal approximation theorem for Riemannian manifold-valued functions subject to geodesically-convex constraints.
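The following is a minimal sketch, not the authors' construction, of the intuition behind exact convex constraint satisfaction: a softmax-attention readout outputs a convex combination of "value" points, so if those points all lie in a convex set K, the output cannot leave K. All names and the example constraint set (a unit box) are illustrative assumptions.

```python
# Minimal sketch (assumed example, not the paper's architecture):
# a softmax-attention readout returns a convex combination of value
# points, so its output stays inside any convex set K containing them.
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def attention_readout(query, keys, values_in_K):
    """Return a convex combination of the rows of `values_in_K`.

    Softmax weights are non-negative and sum to one, so the result lies
    in the convex hull of the value points, hence in K when K is convex.
    """
    scores = keys @ query            # similarity scores
    weights = softmax(scores)        # convex-combination weights
    return weights @ values_in_K     # stays inside conv(values) ⊆ K

# Example: K is the unit box [0, 1]^2; the value points are drawn from K.
rng = np.random.default_rng(0)
values_in_K = rng.uniform(0.0, 1.0, size=(5, 2))
keys = rng.normal(size=(5, 3))
query = rng.normal(size=3)

y = attention_readout(query, keys, values_in_K)
assert np.all((0.0 <= y) & (y <= 1.0))  # output respects the box constraint
print(y)
```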

Authors

Kratsios A; Liu T; Dokmanić I; Zamanlooy B

Publication Date

January 1, 2022

Conference proceedings

ICLR 2022 - 10th International Conference on Learning Representations