Most $L^p$-type universal approximation theorems guarantee that a given machine learning model class $\mathcal{F} \subseteq C(\mathbb{R}^d,\mathbb{R}^D)$ is dense in $L^p_{\mu}(\mathbb{R}^d,\mathbb{R}^D)$ for any suitable finite Borel measure $\mu$ on $\mathbb{R}^d$. Unfortunately, this means that the model's approximation quality can rapidly degrade outside some compact subset of $\mathbb{R}^d$, since any such measure is largely concentrated on some bounded subset of $\mathbb{R}^d$. This paper proposes a generic solution to this approximation-theoretic problem by introducing a canonical transformation which ``upgrades $\mathcal{F}$'s approximation property'' in the following sense. The transformed model class, denoted by $\mathcal{F}\text{-tope}$, is shown to be dense in $L^p_{\mu,\text{strict}}(\mathbb{R}^d,\mathbb{R}^D)$, a topological space whose elements are locally $p$-integrable functions and whose topology is much finer than the usual norm topology on $L^p_{\mu}(\mathbb{R}^d,\mathbb{R}^D)$; here $\mu$ is any suitable $\sigma$-finite Borel measure on $\mathbb{R}^d$. Next, we show that if $\mathcal{F}$ is any family of analytic functions, then there is always a strict ``gap'' between the expressibility of $\mathcal{F}\text{-tope}$ and that of $\mathcal{F}$, since $\mathcal{F}$ can never be dense in $L^p_{\mu,\text{strict}}(\mathbb{R}^d,\mathbb{R}^D)$. In the general case, where $\mathcal{F}$ may contain non-analytic functions, we provide an abstract form of these results guaranteeing that there always exists some function space in which $\mathcal{F}\text{-tope}$ is dense but $\mathcal{F}$ is not, while the converse is never possible. Applications to feedforward networks, convolutional neural networks, and polynomial bases are explored.
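
A schematic restatement of the claimed expressibility gap, under the assumption that $\mathcal{F}$ consists of analytic functions and with closures taken in $L^p_{\mu,\text{strict}}(\mathbb{R}^d,\mathbb{R}^D)$ (the precise hypotheses on $\mathcal{F}$ and $\mu$ are those of the main theorems):
\[
\overline{\mathcal{F}\text{-tope}} \;=\; L^p_{\mu,\text{strict}}(\mathbb{R}^d,\mathbb{R}^D)
\qquad\text{while}\qquad
\overline{\mathcal{F}} \;\subsetneq\; L^p_{\mu,\text{strict}}(\mathbb{R}^d,\mathbb{R}^D).
\]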