
On the Privacy Guarantees of Differentially Private Stochastic Gradient Descent

Abstract

Differentially Private Stochastic Gradient Descent (DP-SGD) is a widely adopted algorithm for privately training machine learning models. A key feature of the algorithm is gradient clipping, which bounds the influence of any individual sample on training. However, clipping also makes the problem non-convex, rendering it challenging to derive upper bounds on the privacy loss. In this paper, we establish effective upper bounds on the privacy loss of both projected DP-SGD and regularized DP-SGD, without convexity or smoothness assumptions on the loss function. Our approach directly analyzes the hockey-stick divergence between coupled stochastic processes via nonlinear data processing inequalities.
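To illustrate the clipping step the abstract refers to, the following is a minimal NumPy sketch of one DP-SGD update (clip each per-sample gradient, average, add Gaussian noise). It is a generic textbook-style illustration, not the analysis or algorithm variants (projected or regularized DP-SGD) studied in the paper; all names and parameter choices here are hypothetical.

```python
import numpy as np

def dp_sgd_step(params, per_sample_grads, clip_norm=1.0,
                noise_multiplier=1.0, lr=0.1, rng=None):
    """One illustrative DP-SGD update.

    Each per-sample gradient is clipped to L2 norm `clip_norm`, the clipped
    gradients are summed, Gaussian noise with standard deviation
    `noise_multiplier * clip_norm` is added, and the result is averaged
    before a plain gradient step.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clipping threshold;
        # this bounds each sample's influence on the update.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    batch_size = len(clipped)
    # Noise is calibrated to the clipping threshold, which acts as the
    # per-sample sensitivity of the summed gradient.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / batch_size
    return params - lr * noisy_mean
```

With `noise_multiplier=0` the update reduces to ordinary averaged SGD with per-sample clipping, which makes the clipping behavior easy to check in isolation.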

Authors

Asoodeh S; Diaz M

Pagination

pp. 380-385

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

July 12, 2024

DOI

10.1109/isit57864.2024.10619520

Name of conference

2024 IEEE International Symposium on Information Theory (ISIT)
