
Inverse Entropic Optimal Transport Solves Semi-supervised Learning via Data Likelihood Maximization

Abstract

Learning conditional distributions $\pi^*(\cdot|x)$ is a central problem in machine learning, which is typically approached via supervised methods with paired data $(x,y) \sim \pi^*$. However, acquiring paired data samples is often challenging, especially in problems such as domain translation. This necessitates the development of $\textit{semi-supervised}$ models that utilize both limited paired data and additional unpaired i.i.d. samples $x \sim \pi^*_x$ and $y \sim \pi^*_y$ from the marginal distributions. The usage of such combined data is complex and often relies on heuristic approaches. To tackle this issue, we propose a new learning paradigm that integrates both paired and unpaired data $\textbf{seamlessly}$ through data likelihood maximization techniques. We demonstrate that our approach also connects intriguingly with inverse entropic optimal transport (OT). This finding allows us to apply recent advances in computational OT to establish an $\textbf{end-to-end}$ learning algorithm for recovering $\pi^*(\cdot|x)$. In addition, we derive a universal approximation property, showing that our approach can theoretically recover the true conditional distributions with arbitrarily small error. Finally, we demonstrate through empirical tests that our method effectively learns conditional distributions using paired and unpaired data simultaneously.
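The combined objective sketched in the abstract, a likelihood over the few paired samples plus a likelihood of the unpaired $y$ samples under the model's induced marginal, can be illustrated in a toy one-dimensional setting. The sketch below is not the paper's algorithm: it assumes a hypothetical Gaussian conditional model $q_\theta(y|x)=\mathcal{N}(y;\,ax+b,\,\sigma^2)$ and approximates the model marginal of $y$ by a Monte Carlo mixture over the unpaired $x$ samples. Its only purpose is to show how paired and unpaired data can enter one likelihood objective.

```python
import math
import random

def gauss_logpdf(y, mu, sigma):
    """Log-density of N(mu, sigma^2) at y."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (y - mu) ** 2 / (2 * sigma**2)

def semi_supervised_loglik(theta, paired, unpaired_x, unpaired_y, sigma=0.5):
    """Toy combined likelihood for a Gaussian conditional model q(y|x)=N(a*x+b, sigma^2).

    Paired term: sum of log q(y|x) over paired samples.
    Unpaired term: log of the model marginal q(y) = E_{x ~ pi_x}[q(y|x)],
    approximated by a Monte Carlo mixture over the unpaired x samples.
    """
    a, b = theta
    ll = sum(gauss_logpdf(y, a * x + b, sigma) for x, y in paired)
    for y in unpaired_y:
        mix = sum(math.exp(gauss_logpdf(y, a * x + b, sigma))
                  for x in unpaired_x) / len(unpaired_x)
        ll += math.log(mix + 1e-300)  # guard against underflow
    return ll

# Synthetic data from a linear-Gaussian ground truth y = 2x - 1 + noise.
random.seed(0)
data = [(x, 2.0 * x - 1.0 + random.gauss(0, 0.5))
        for x in (random.gauss(0, 1) for _ in range(200))]
paired = data[:10]                          # only a few paired samples
unpaired_x = [x for x, _ in data[10:105]]   # disjoint subsets, so no
unpaired_y = [y for _, y in data[105:]]     # implicit pairing remains

ll_true = semi_supervised_loglik((2.0, -1.0), paired, unpaired_x, unpaired_y)
ll_bad = semi_supervised_loglik((0.0, 0.0), paired, unpaired_x, unpaired_y)
print(ll_true > ll_bad)  # the ground-truth parameters score higher
```

Under these assumptions, both data sources shape the same objective: the paired term pins down the conditional mapping, while the unpaired term constrains the induced marginal, which is the intuition the paper develops rigorously via inverse entropic OT.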

Authors

Persiianov M; Asadulaev A; Andreev N; Starodubcev N; Baranchuk D; Kratsios A; Burnaev E; Korotin A

Publication date

November 5, 2025

DOI

10.48550/arxiv.2410.02628

Preprint server

arXiv