Most $L^p$-type universal approximation theorems guarantee that a given
machine learning model class $\mathscr{F}\subseteq
C(\mathbb{R}^d,\mathbb{R}^D)$ is dense in
$L^p_{\mu}(\mathbb{R}^d,\mathbb{R}^D)$ for any suitable finite Borel measure
$\mu$ on $\mathbb{R}^d$. Unfortunately, this means that the model's
approximation quality can rapidly degenerate outside some compact subset of
$\mathbb{R}^d$, since any such measure is tight, placing all but an
arbitrarily small amount of its mass on some bounded subset of
$\mathbb{R}^d$.
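For a concrete illustration (a toy example of ours, not one from the paper):
take $d=D=p=1$, let $\mu$ be the standard Gaussian measure, and let $f$ be any
target function. The perturbed model $\hat{f}(x):=f(x)+c\,e^{x^{2}/4}$
satisfies
\[
\|f-\hat{f}\|_{L^{1}_{\mu}}
=\frac{c}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{x^{2}/4}\,e^{-x^{2}/2}\,dx
=c\sqrt{2},
\]
which can be made arbitrarily small by shrinking $c>0$, even though the
pointwise error $|f(x)-\hat{f}(x)|=c\,e^{x^{2}/4}$ blows up as
$|x|\to\infty$.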
This paper proposes a generic solution to this approximation-theoretic
problem by introducing a canonical transformation which "upgrades
$\mathscr{F}$'s approximation property" in the following sense. The
transformed model class, denoted by $\mathscr{F}\text{-tope}$, is shown to be
dense in $L^p_{\mu,\text{strict}}(\mathbb{R}^d,\mathbb{R}^D)$, a topological
space whose elements are locally $p$-integrable functions and whose topology
is much finer than the usual norm topology on
$L^p_{\mu}(\mathbb{R}^d,\mathbb{R}^D)$; here $\mu$ is any suitable
$\sigma$-finite Borel measure on $\mathbb{R}^d$.
Next, we show that if $\mathscr{F}$ is any family of analytic functions, then
there is always a strict "gap" between $\mathscr{F}\text{-tope}$'s
expressibility and that of $\mathscr{F}$, since we find that $\mathscr{F}$
can never be dense in
$L^p_{\mu,\text{strict}}(\mathbb{R}^d,\mathbb{R}^D)$.
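One classical fact lends intuition for this gap (a standard consequence of
the identity theorem, offered as orientation rather than as the paper's
argument): a real-analytic function with compact support must vanish
identically,
\[
g\in C^{\omega}(\mathbb{R}^d),\qquad
g\equiv 0 \text{ on } \mathbb{R}^d\setminus K \text{ for some compact } K
\;\Longrightarrow\;
g\equiv 0 \text{ on } \mathbb{R}^d,
\]
so purely analytic families are structurally ill-suited to targets that are
prescribed simultaneously on compact sets and near infinity.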
In the general case, where $\mathscr{F}$ may contain non-analytic functions,
we provide an abstract
form of these results guaranteeing that there always exists some function space
in which $\mathscr{F}\text{-tope}$ is dense but $\mathscr{F}$ is not, while
the converse is never possible. Applications to feedforward networks,
convolutional neural networks, and polynomial bases are explored.