General convex relaxations of implicit functions and inverse functions
Abstract
Convex relaxations of nonconvex functions provide useful bounding information in applications such as deterministic global optimization and reachability analysis. In some situations, the original nonconvex functions are not known explicitly, but are instead described implicitly by nonlinear equation systems. In these cases, established convex relaxation methods for closed-form functions are not directly applicable. This article presents a new general strategy to construct convex relaxations for such implicit functions. These relaxations are described as convex parametric programs whose constraints are convex relaxations of the original residual function. This relaxation strategy is straightforward to implement, produces tight relaxations in practice, is particularly efficient to carry out when monotonicity properties can be exploited, and does not assume the existence or uniqueness of an implicit function on the entire intended domain. Unlike all previous approaches known to the authors, this new approach permits any relaxations of the residual function; it does not require the residual relaxations to be factorable or to be obtained from a McCormick-like traversal of a computational graph. This new convex relaxation strategy is extended to inverse functions, compositions involving implicit functions, feasible-set mappings in constraint satisfaction problems, and solutions of parametric ODEs. Based on a proof-of-concept implementation in Julia, numerical examples are presented to illustrate the convex relaxations produced for various implicit functions and optimal-value functions.
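To make the construction concrete, the following is a minimal, self-contained sketch (in Julia, matching the language of the proof-of-concept implementation mentioned above, but not the authors' code) of the parametric-program idea for a toy residual f(z, p) = zp - 1, whose implicit function is x(p) = 1/p. The bilinear term is relaxed here with standard McCormick inequalities; for each fixed p the resulting "min z" and "max z" programs reduce to closed-form interval endpoints. The bounds, variable names, and residual chosen below are illustrative assumptions, not values from the article.

```julia
# Hypothetical, self-contained sketch (not the authors' implementation).
# Residual: f(z, p) = z*p - 1, so the implicit function is x(p) = 1/p,
# considered on z in [zL, zU] and p in [pL, pU].
#
# Per the strategy described above, convex/concave relaxations of x are the
# optimal-value functions of parametric programs whose constraints are
# relaxations of the residual.  Relaxing the bilinear term z*p with standard
# McCormick inequalities makes the feasible set in z an interval for each
# fixed p, so both programs (min z and max z) have closed-form solutions.

zL, zU = 0.5, 2.0   # assumed bounds on the implicit-function value
pL, pU = 0.5, 2.0   # assumed parameter bounds

function implicit_relaxations(p)
    # Imposing (convex underestimator of z*p) - 1 <= 0 gives upper bounds on z:
    ub1 = (1 + zL*pL - zL*p) / pL   # from z*p >= zL*p + pL*z - zL*pL
    ub2 = (1 + zU*pU - zU*p) / pU   # from z*p >= zU*p + pU*z - zU*pU
    # Imposing (concave overestimator of z*p) - 1 >= 0 gives lower bounds on z:
    lb1 = (1 + zU*pL - zU*p) / pL   # from z*p <= zU*p + pL*z - zU*pL
    lb2 = (1 + zL*pU - zL*p) / pU   # from z*p <= zL*p + pU*z - zL*pU
    xcv = max(zL, lb1, lb2)   # optimal value of "min z": convex in p
    xcc = min(zU, ub1, ub2)   # optimal value of "max z": concave in p
    return xcv, xcc
end

for p in (0.5, 1.0, 1.5, 2.0)
    xcv, xcc = implicit_relaxations(p)
    println("p = $p:  $xcv <= 1/p = $(1/p) <= $xcc")
end
```

In this sketch, xcv is a maximum of affine functions of p and xcc a minimum of affine functions, so they are respectively convex and concave; and because the McCormick relaxations enclose the residual, any root of the original residual remains feasible for the relaxed constraints, giving xcv(p) <= 1/p <= xcc(p) on the parameter interval.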