Conference

Privacy-Aware Federated Fine-Tuning of Large Pretrained Models With Just Forward Propagation

Abstract

With the extraordinary success of generative artificial intelligence, large pretrained models (LPMs) have been widely used to achieve human-level performance. Despite their one-shot capability, it is often preferable to fine-tune LPMs for domain-specific downstream tasks. Federated learning is therefore leveraged to fine-tune LPMs, enabling the concurrent use of multiple distributed clients and their local datasets. Because first-order fine-tuning methods incur high computational and memory costs from backward propagation, we are motivated to propose a federated zeroth-order fine-tuning method that requires only forward propagation. Moreover, we leverage differential privacy to further preserve the data privacy of local clients. Experimental results illustrate that our proposed federated zeroth-order method reduces memory usage while retaining testing accuracy comparable to state-of-the-art benchmarks.
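The abstract describes forward-only (zeroth-order) fine-tuning combined with differential privacy in a federated setting. The sketch below is a minimal, hypothetical illustration of that general idea, not the authors' algorithm: it uses a SPSA-style two-point gradient estimator (two forward passes per step), per-client gradient clipping with Gaussian noise, and plain server-side averaging on a toy least-squares task. All function names, hyperparameters, and the toy data are illustrative assumptions.

import numpy as np

def loss(w, X, y):
    # Forward pass only: mean-squared error of a linear model.
    return float(np.mean((X @ w - y) ** 2))

def zo_gradient(w, X, y, mu, rng):
    # Two-point zeroth-order estimate along a random direction z:
    # g ~ (L(w + mu*z) - L(w - mu*z)) / (2*mu) * z, no backward pass needed.
    z = rng.standard_normal(w.shape)
    return (loss(w + mu * z, X, y) - loss(w - mu * z, X, y)) / (2 * mu) * z

def client_update(w, X, y, steps=200, lr=0.05, mu=1e-3, clip=1.0, sigma=0.5, seed=0):
    # Local forward-only fine-tuning with clipped, Gaussian-noised updates
    # (a simple stand-in for a differentially private mechanism).
    rng = np.random.default_rng(seed)
    w = w.copy()
    for _ in range(steps):
        g = zo_gradient(w, X, y, mu, rng)
        g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))      # clip to norm <= clip
        g += rng.normal(0.0, sigma * clip, size=g.shape)        # add calibrated noise
        w -= lr * g
    return w

# Toy federated round: three clients share the same underlying linear task,
# the server averages the privatized client models.
rng = np.random.default_rng(42)
true_w = np.arange(1, 6, dtype=float)
clients = []
for _ in range(3):
    X = rng.standard_normal((100, 5))
    y = X @ true_w + 0.1 * rng.standard_normal(100)
    clients.append((X, y))

w_global = np.zeros(5)
w_global = np.mean(
    [client_update(w_global, X, y, seed=i) for i, (X, y) in enumerate(clients)],
    axis=0,
)
print("global weights after one round:", np.round(w_global, 2))

Only two loss evaluations (forward passes) are needed per local step, which is the memory advantage the abstract refers to; the clipping and noise scale here are arbitrary and would need to be set from a target privacy budget in practice.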

Authors

Xing K; Dong Y; Hu X; Leung VCM; Deen MJ; Guo S

Volume

00

Pagination

pp. 1-5

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

April 11, 2025

DOI

10.1109/icassp49660.2025.10889811

Name of conference

ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
