Neural Snowflakes: Universal Latent Graph Inference via Trainable Latent Geometries
Abstract
The inductive bias of a graph neural network (GNN) is largely encoded in its
specified graph. Latent graph inference relies on latent geometric
representations to dynamically rewire or infer a GNN's graph so as to maximize the
GNN's downstream predictive performance, but it lacks solid theoretical
foundations in terms of embedding-based representation guarantees. This paper
addresses this issue by introducing a trainable deep learning architecture,
coined neural snowflake, that can adaptively implement fractal-like metrics on
$\mathbb{R}^d$. We prove that any given finite weighted graph can be
isometrically embedded using a standard MLP encoder. Furthermore, when the latent
graph can be represented in the feature space of a sufficiently regular kernel,
we show that the combined neural snowflake and MLP encoder do not succumb to
the curse of dimensionality, requiring a number of parameters that is only a
low-degree polynomial in the number of nodes. This implementation enables a
low-dimensional isometric embedding of the latent graph. We conduct synthetic
experiments to demonstrate the superior metric learning capabilities of neural
snowflakes compared with more familiar spaces such as Euclidean space.
Additionally, we carry out latent graph inference experiments on graph
benchmarks. The neural snowflake model consistently matches or surpasses the
predictive performance of state-of-the-art latent graph inference models.
Importantly, this improvement is achieved without random search over candidate
latent geometries; the optimal latent geometry is instead learned in a
differentiable manner.
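
For intuition only, the sketch below shows one way a trainable "snowflake-like" metric on $\mathbb{R}^d$ could be parameterized in PyTorch. The class name `NeuralSnowflakeMetric`, the mixture-of-power-functions form, and all hyperparameters are illustrative assumptions, not the architecture proposed in the paper; the sketch only captures the general idea of learning a metric by deforming Euclidean distances in a differentiable way.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class NeuralSnowflakeMetric(nn.Module):
    """Illustrative learnable snowflake-like metric (hypothetical sketch).

    Applies a learned function f to the Euclidean distance:
        d_theta(x, y) = f(||x - y||),   f(r) = sum_k w_k * r ** a_k,
    with w_k >= 0 and a_k in (0, 1). Each term is concave, increasing, and
    vanishes at 0, so f is subadditive and the triangle inequality is
    preserved; d_theta therefore remains a metric for any parameter values.
    """

    def __init__(self, num_terms: int = 4):
        super().__init__()
        self.raw_weights = nn.Parameter(torch.zeros(num_terms))    # -> w_k via softplus
        self.raw_exponents = nn.Parameter(torch.zeros(num_terms))  # -> a_k via sigmoid

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # Base Euclidean distance between paired rows of x and y.
        r = torch.norm(x - y, dim=-1, keepdim=True)                # shape (..., 1)
        w = F.softplus(self.raw_weights)                           # non-negative weights
        a = torch.sigmoid(self.raw_exponents)                      # exponents in (0, 1)
        # Learned snowflake transform: sum_k w_k * r ** a_k.
        return (w * r.clamp_min(1e-12) ** a).sum(dim=-1)


if __name__ == "__main__":
    metric = NeuralSnowflakeMetric()
    x, y = torch.randn(8, 16), torch.randn(8, 16)
    print(metric(x, y).shape)  # torch.Size([8])
```

The design choice in this sketch reflects a standard fact from metric geometry: composing a metric with a concave, increasing function that vanishes at zero (a "snowflake" transform, e.g. $d^\alpha$ with $0 < \alpha < 1$) yields another metric, so distance scales can be reshaped and trained end-to-end without violating the metric axioms.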