Online Federation For Mixtures of Proprietary Agents with Black-Box Encoders
Abstract
Most industry-standard generative AIs and feature encoders are proprietary,
offering only black-box access: their outputs are observable, but their
internal parameters and architectures remain hidden from the end-user. This
black-box access is especially limiting when constructing mixture-of-experts-type
ensemble models, since the user cannot optimize each proprietary AI's internal
parameters. The problem naturally lends itself to a non-cooperative
game-theoretic lens in which each proprietary AI (agent) competes against the
other agents, this competition arising from each agent's obliviousness to the
others' internal structure. In contrast, the user acts as a central planner
trying to synchronize the ensemble of competing AIs.
We show the existence of a unique Nash equilibrium in the online setting,
which we compute in closed form by exhibiting a feedback mechanism between
any given time series and the sequence generated by each (proprietary) AI
agent. Our solution is implemented as a decentralized, federated-learning
algorithm in which each agent optimizes its structure locally, on its own
machine, without ever revealing any internal structure to the others. We obtain
refined expressions for pre-trained models such as transformers, random feature
models, and echo-state networks. Our ``proprietary federated learning''
algorithm is evaluated on a range of real-world and synthetic time-series
benchmarks, where it achieves orders-of-magnitude improvements in predictive
accuracy over natural baselines, of which there are surprisingly few since
this problem remains largely unexplored.
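
To make the black-box constraint concrete, the sketch below (Python, with purely illustrative class and function names) shows a central planner that re-weights proprietary forecasters online using only their observed predictions, via a standard exponentiated-gradient update. It is a minimal stand-in for the setting described above, not the paper's closed-form equilibrium computation or its federated procedure.

```python
# Minimal sketch (not the paper's algorithm): a central planner combines
# black-box agents' one-step-ahead forecasts of a time series and updates
# mixture weights online. Agent internals are never accessed; only their
# predictions are observed. All names here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

class BlackBoxAgent:
    """Stand-in for a proprietary forecaster: only .predict() is exposed."""
    def __init__(self, lag, noise):
        self._lag = lag          # hidden internal structure
        self._noise = noise
    def predict(self, history):
        # Naive lag-based forecast plus agent-specific noise.
        lag = min(self._lag, len(history))
        return history[-lag] + self._noise * rng.standard_normal()

def online_ensemble(series, agents, eta=2.0):
    """Planner keeps mixture weights over agents, updated from observed losses."""
    k = len(agents)
    weights = np.full(k, 1.0 / k)
    preds = []
    for t in range(1, len(series)):
        history = series[:t]
        agent_preds = np.array([a.predict(history) for a in agents])
        preds.append(weights @ agent_preds)          # planner's combined forecast
        losses = (agent_preds - series[t]) ** 2      # squared error per agent
        weights *= np.exp(-eta * losses)             # exponentiated-gradient step
        weights /= weights.sum()
    return np.array(preds), weights

# Toy usage: noisy sine wave, three heterogeneous black-box agents.
t = np.linspace(0, 8 * np.pi, 400)
series = np.sin(t) + 0.05 * rng.standard_normal(t.size)
agents = [BlackBoxAgent(lag=1, noise=0.05),
          BlackBoxAgent(lag=2, noise=0.10),
          BlackBoxAgent(lag=5, noise=0.02)]
preds, final_weights = online_ensemble(series, agents)
print("final mixture weights:", np.round(final_weights, 3))
```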