Three Variants of Differential Privacy: Lossless Conversion and Applications
Abstract
We consider three variants of differential privacy (DP), namely approximate
DP, Rényi DP (RDP), and hypothesis test DP. In the first part, we develop
machinery for optimally relating approximate DP to RDP, based on the joint
range of the two $f$-divergences that underlie approximate DP and RDP. In
particular, this enables us to derive the optimal approximate DP parameters of
a mechanism that satisfies a given level of RDP. As an application, we use
our result within the moments accountant framework for characterizing the
privacy guarantees of noisy stochastic gradient descent (SGD). Compared to
the state-of-the-art, our bounds may allow about 100 more SGD iterations for
training deep learning models for the same privacy budget.
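To make the kind of conversion involved concrete, the following is a minimal
Python sketch of the standard (non-optimal) RDP-to-approximate-DP conversion
due to Mironov (2017), applied to compositions of the Gaussian mechanism. It
ignores amplification by subsampling, the noise multiplier, $\delta$, and
iteration count are purely illustrative, and it is this baseline conversion
that the optimal one developed here tightens.

```python
import math

def rdp_gaussian(alpha: float, sigma: float, steps: int) -> float:
    # RDP of order alpha for `steps` compositions of the Gaussian mechanism
    # with noise multiplier sigma (sensitivity-1 queries); subsampling
    # amplification, used by the full moments accountant, is ignored here.
    return steps * alpha / (2.0 * sigma ** 2)

def rdp_to_approx_dp(delta: float, sigma: float, steps: int) -> float:
    # Standard conversion (Mironov, 2017): a mechanism satisfying
    # (alpha, eps_R)-RDP is (eps_R + log(1/delta)/(alpha - 1), delta)-DP;
    # minimize the resulting eps over a grid of orders alpha > 1.
    orders = [1.0 + k / 10.0 for k in range(1, 1000)]
    return min(
        rdp_gaussian(a, sigma, steps) + math.log(1.0 / delta) / (a - 1.0)
        for a in orders
    )

# Illustrative parameters only: sigma = 1.3, delta = 1e-5, 10,000 iterations.
print(rdp_to_approx_dp(delta=1e-5, sigma=1.3, steps=10_000))
```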
In the second part, we establish a relationship between RDP and hypothesis
test DP, which allows us to translate an RDP constraint into a tradeoff
between the type I and type II error probabilities of a certain binary
hypothesis test. We then demonstrate that, for noisy SGD, our result leads to
tighter privacy guarantees than the recently proposed $f$-DP framework over
some range of parameters.
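For reference, the type I/type II tradeoff alluded to above can be phrased
via the trade-off function of the $f$-DP framework of Dong et al.; the
display below restates that standard definition, not the new
RDP-to-hypothesis-test bound established in this work:
\[
T(P,Q)(\alpha) \;=\; \inf_{\phi}\bigl\{\beta_{\phi} : \alpha_{\phi} \le \alpha\bigr\},
\qquad \alpha_{\phi} = \mathbb{E}_{P}[\phi], \quad \beta_{\phi} = 1 - \mathbb{E}_{Q}[\phi],
\]
where the infimum is over all rejection rules $\phi$, and a mechanism $M$ is
$f$-DP if $T\bigl(M(D), M(D')\bigr)(\alpha) \ge f(\alpha)$ for all
neighboring datasets $D, D'$ and all $\alpha \in [0,1]$.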