
Sample-Optimal Locally Private Hypothesis Selection and the Provable Benefits of Interactivity

Abstract

We study the problem of hypothesis selection under the constraint of local differential privacy. Given a class F of k distributions and a set of i.i.d. samples from an unknown distribution h, the goal of hypothesis selection is to pick a distribution f̂ whose total variation distance to h is comparable with that of the best distribution in F (with high probability). We devise an ε-locally-differentially-private (ε-LDP) algorithm that uses Θ(k / (α² min{ε², 1})) samples to guarantee that d_TV(h, f̂) ≤ α + 9 min_{f∈F} d_TV(h, f) with high probability. This sample complexity is optimal for ε < 1, matching the lower bound of Gopi et al. (2020). All previously known algorithms for this problem required Ω((k log k) / (α² min{ε², 1})) samples. Moreover, our result demonstrates the power of interaction for ε-LDP hypothesis selection: it breaks the known lower bound of Ω((k log k) / (α² ε²)) on the sample complexity of non-interactive hypothesis selection. Our algorithm achieves this using only Θ(log log k) rounds of interaction. To prove our results, we define the notion of critical queries for a statistical query algorithm (SQA), which may be of independent interest. Informally, an SQA uses a small number of critical queries if its success relies on the accuracy of only a small number of the queries it asks. We then design an LDP algorithm that uses a small number of critical queries.
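To make the hypothesis selection objective concrete, the following is a minimal, non-private sketch of the classical Scheffé tournament over a finite support: each pair of candidates is compared on its Scheffé set, and the candidate winning the most pairwise duels is returned. This is an illustration of the problem setup only, not the paper's algorithm; the paper's ε-LDP method additionally privatizes the responses and uses Θ(log log k) rounds of interaction. All names here (`scheffe_select`, the dict-based distribution encoding) are illustrative choices, not from the paper.

```python
from collections import Counter

def scheffe_select(candidates, samples, support):
    """Pick the candidate closest in TV distance to the empirical
    sample distribution via pairwise Scheffe tests.

    candidates: list of dicts mapping support points to probabilities
    samples: list of draws from the unknown distribution h
    Non-private illustration of hypothesis selection only.
    """
    counts = Counter(samples)
    n = len(samples)
    emp = {x: counts.get(x, 0) / n for x in support}
    k = len(candidates)
    wins = [0] * k
    for i in range(k):
        for j in range(i + 1, k):
            fi, fj = candidates[i], candidates[j]
            # Scheffe set: points where fi assigns more mass than fj
            A = [x for x in support if fi[x] > fj[x]]
            mi = sum(fi[x] for x in A)   # mass fi claims for A
            mj = sum(fj[x] for x in A)   # mass fj claims for A
            me = sum(emp[x] for x in A)  # empirical mass of A
            if abs(mi - me) <= abs(mj - me):
                wins[i] += 1             # fi explains A better
            else:
                wins[j] += 1
    return max(range(k), key=lambda i: wins[i])
```

The tournament uses one statistical query (the empirical mass of a Scheffé set) per candidate pair, i.e. Θ(k²) queries; the paper's contribution is showing that only a few of these queries are "critical", which allows a far smaller locally private sample budget.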

Authors

Pour AF; Ashtiani H; Asoodeh S

Volume

247

Pagination

pp. 4240-4275

Publication Date

January 1, 2024

Conference proceedings

Proceedings of Machine Learning Research
