Simultaneously Solving FBSDEs and Their Associated Semilinear Elliptic PDEs with Small Neural Operators
Abstract
Forward-backward stochastic differential equations (FBSDEs) play an
important role in optimal control, game theory, economics, mathematical
finance, and reinforcement learning. Unfortunately, the available FBSDE
solvers operate on \textit{individual} FBSDEs: they must be re-run from
scratch for each problem instance, so they do not provide a computationally
feasible strategy for solving large families of FBSDEs. \textit{Neural
operators} (NOs) offer an alternative approach for \textit{simultaneously
solving} large families of decoupled FBSDEs by directly approximating the
solution operator, which maps the terminal condition and the dynamics of the
backward process (\textit{inputs}) to the solution of the associated FBSDE
(\textit{outputs}). Though
universal approximation theorems (UATs) guarantee the existence of such NOs,
these NOs are unrealistically large. We show that, after only a few simple,
theoretically guided tweaks to the standard convolutional NO architecture,
``small'' NOs can uniformly approximate the solution operator to structured
families of FBSDEs with random terminal time, on suitable compact sets
determined by Sobolev norms, with a depth that is logarithmic, a width that
is constant, and a rank that is polynomial in the reciprocal approximation
error.
This result rests on our second result, which is our main contribution to
the NOs-for-PDEs literature: convolutional NOs of similar depth and width,
whose rank grows only \textit{quadratically} (at a dimension-free rate) in
the reciprocal approximation error, can uniformly approximate the solution
operators of the class of semilinear elliptic PDEs associated with these
families of FBSDEs. A key insight we uncover into how NOs work is that the
convolutional layers of our NO can approximately implement the fixed-point
iteration used to prove the existence of a unique solution to these
semilinear elliptic PDEs.