Conference

Differentially Private Federated Learning: An Information-Theoretic Perspective

Abstract

We propose a new technique for deriving the differential privacy parameters in federated learning (FL). We consider the setting where a machine learning model is iteratively trained using stochastic gradient descent (SGD) and only the last update is publicly released. In this approach, we interpret each training iteration as a Markov kernel. We then quantify the impact of the kernel on privacy parameters via the contraction coefficient of the $E_{\gamma}$-divergence that underlies differential privacy. To do so, we generalize the well-known Dobrushin's ergodicity coefficient, originally defined in terms of total variation distance, to a family of $f$-divergences. We then analyze the convergence rate of SGD under the proposed private FL framework.
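For reference, the following sketch spells out the standard definitions behind the quantities named in the abstract: the $E_{\gamma}$-divergence (hockey-stick divergence), its characterization of $(\varepsilon,\delta)$-differential privacy, and the contraction coefficient that generalizes Dobrushin's ergodicity coefficient. The notation ($P$, $Q$ for distributions, $M$ for a mechanism, $K$ for a Markov kernel) is illustrative and not drawn from the paper itself.

```latex
% E_gamma (hockey-stick) divergence between distributions P and Q, gamma >= 1:
\[
  E_{\gamma}(P \| Q) \;=\; \sup_{A} \bigl( P(A) - \gamma\, Q(A) \bigr),
  \qquad \gamma \ge 1 .
\]

% A mechanism M is (epsilon, delta)-differentially private if and only if,
% for all neighboring datasets D and D',
\[
  E_{e^{\varepsilon}}\bigl( M(D) \,\|\, M(D') \bigr) \;\le\; \delta .
\]

% Contraction coefficient of a Markov kernel K under E_gamma; for gamma = 1
% (total variation distance) this reduces to Dobrushin's ergodicity coefficient.
\[
  \eta_{\gamma}(K) \;=\; \sup_{P \ne Q}
  \frac{E_{\gamma}(PK \,\|\, QK)}{E_{\gamma}(P \,\|\, Q)} .
\]
```

In the setting described above, each SGD iteration acts as a Markov kernel on the model's distribution, so bounding $\eta_{\gamma}$ per iteration is what lets the privacy parameters of the final released model be tracked across training.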

Authors

Asoodeh S; Chen W-N; Calmon FP; Özgür A

Volume

00

Pagination

pp. 344-349

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

July 20, 2021

DOI

10.1109/ISIT45174.2021.9518124

Name of conference

2021 IEEE International Symposium on Information Theory (ISIT)
