
Sample-Efficient Private Learning of Mixtures of Gaussians

Abstract

We study the problem of learning mixtures of Gaussians with approximate differential privacy. We prove that roughly kd^2 + k^1.5 d^1.75 + k^2 d samples suffice to learn a mixture of k arbitrary d-dimensional Gaussians up to low total variation distance, with differential privacy. Our work improves over the previous best result [AAL24b] (which required roughly k^2 d^4 samples) and is provably optimal when d is much larger than k^2. Moreover, we give the first optimal bound for privately learning mixtures of k univariate (i.e., 1-dimensional) Gaussians. Importantly, we show that the sample complexity for learning mixtures of univariate Gaussians is linear in the number of components k, whereas the previous best sample complexity [AAL21] was quadratic in k. Our algorithms utilize various techniques, including the inverse sensitivity mechanism [AD20b, AD20a, HKMN23], sample compression for distributions [ABDH+20], and methods for bounding volumes of sumsets.
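As a rough, back-of-the-envelope illustration (not taken from the paper itself), the Python sketch below compares the new bound kd^2 + k^1.5 d^1.75 + k^2 d with the previous k^2 d^4 bound [AAL24b] for a few example (k, d) pairs, ignoring constants and logarithmic factors; the function names and example values are illustrative assumptions.

```python
# Illustrative sketch (our assumptions, not the paper's code): compare the
# new sample complexity bound with the previous best, up to constants and
# logarithmic factors.

def new_bound(k: int, d: int) -> float:
    # Roughly k*d^2 + k^1.5*d^1.75 + k^2*d samples (this work).
    return k * d**2 + k**1.5 * d**1.75 + k**2 * d

def old_bound(k: int, d: int) -> float:
    # Roughly k^2*d^4 samples (previous best, [AAL24b]).
    return k**2 * d**4

for k, d in [(2, 10), (5, 100), (10, 1000)]:
    print(f"k={k}, d={d}: new ~{new_bound(k, d):.3g}, old ~{old_bound(k, d):.3g}")
```

Consistent with the abstract's optimality claim, when d is much larger than k^2 the kd^2 term dominates the new bound, which is the regime where the gap to the previous k^2 d^4 bound is largest.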

Authors

Ashtiani H; Majid M; Narayanan S

Volume

37

Publication Date

January 1, 2024

Conference proceedings

Advances in Neural Information Processing Systems

ISSN

1049-5258
