Designing universal causal deep learning models: The geometric (Hyper)transformer

abstract

  • Several problems in stochastic analysis are defined through their geometry, and preserving that geometric structure is essential to generating meaningful predictions. Nevertheless, how to design principled deep learning (DL) models capable of encoding these geometric structures remains largely unknown. We address this open problem by introducing a universal causal geometric DL framework in which the user specifies a suitable pair of metric spaces 𝒳 and 𝒴, and our framework returns a DL model capable of causally approximating any "regular" map sending time series in 𝒳 to time series in 𝒴 while respecting their forward flow of information throughout time. Suitable geometries on 𝒴 include various (adapted) Wasserstein spaces arising in optimal stopping problems, a variety of statistical manifolds describing the conditional distribution of continuous-time finite state Markov chains, and all Fréchet spaces admitting a Schauder basis, for example, as in classical finance. Suitable 𝒳 are compact subsets of any Euclidean space. Our results all quantitatively express the number of parameters needed for our DL model to achieve a given approximation error as a function of the target map's regularity and of the geometric structure both of 𝒳 and of 𝒴. Even when omitting any temporal structure, our universal approximation theorems are the first guarantees that Hölder functions defined between such 𝒳 and 𝒴 can be approximated by DL models.
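
The causality constraint in the abstract ("forward flow of information") is commonly enforced in sequence models via a causal attention mask. The sketch below is a minimal, hypothetical illustration of that one idea only; it is not the paper's geometric hypertransformer, and all layer sizes, hyperparameters, and names (e.g. CausalSequenceModel) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CausalSequenceModel(nn.Module):
    """Maps a time series in R^d_in to a time series in R^d_out,
    attending only to past and present time steps."""

    def __init__(self, d_in: int, d_out: int, d_model: int = 64,
                 n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.embed = nn.Linear(d_in, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.readout = nn.Linear(d_model, d_out)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, time, d_in)
        T = x.size(1)
        # Additive mask: -inf strictly above the diagonal blocks attention
        # to future time steps, so output at time t sees inputs up to t only.
        mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        h = self.encoder(self.embed(x), mask=mask)
        return self.readout(h)

# Usage: 8 sample paths of length 16 in R^3 mapped to paths in R^2.
model = CausalSequenceModel(d_in=3, d_out=2)
y = model(torch.randn(8, 16, 3))   # y.shape == (8, 16, 2)
```

The paper's contribution goes well beyond this sketch: it equips the output space with non-Euclidean geometry (Wasserstein spaces, statistical manifolds, Fréchet spaces) and proves quantitative approximation rates, neither of which this toy model captures.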

publication date

  • April 2024