
Training Fair Models in Federated Learning without Data Privacy Infringement

Abstract

Training fair machine learning models is becoming increasingly important. Because many powerful models are trained collaboratively by multiple parties, each holding sensitive data, it is natural to explore whether fair models can be trained in federated learning, so that the fairness of the trained model, the data privacy of the clients, and the collaboration between clients are all respected simultaneously. However, training fair models in federated learning is challenging, since it is far from trivial to estimate the fairness of a model without access to the private data of the participating parties, access that is typically prohibited by the privacy requirements of federated learning. In this paper, we first propose a federated estimation method that accurately estimates the fairness of a model without infringing on the data privacy of any party. We then use this fairness estimate to formulate a novel problem of training fair models in federated learning, and develop FedFair, a well-designed federated learning framework that trains a fair model with high performance and without data privacy infringement. Extensive experiments on three real-world data sets demonstrate the excellent fair model training performance of our method.
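The core idea of estimating fairness without sharing private data can be illustrated with a generic sketch: each client reports only aggregate statistics (per-group counts of positive predictions), and the server combines them into a global fairness metric such as the demographic parity gap. This is not the paper's FedFair method; the function names and the metric choice are assumptions for illustration only.

```python
# Hedged sketch: federated estimation of a demographic parity gap.
# Each client shares only aggregate counts per protected group,
# never individual records. Illustrative only, not the FedFair protocol.

def local_counts(predictions, groups):
    """Client side: count (positive predictions, total) per protected group."""
    counts = {}
    for y_hat, g in zip(predictions, groups):
        pos, tot = counts.get(g, (0, 0))
        counts[g] = (pos + int(y_hat == 1), tot + 1)
    return counts

def federated_dp_gap(all_client_counts):
    """Server side: merge per-client counts, then compute the
    demographic parity gap (max minus min positive rate across groups)."""
    merged = {}
    for counts in all_client_counts:
        for g, (pos, tot) in counts.items():
            mp, mt = merged.get(g, (0, 0))
            merged[g] = (mp + pos, mt + tot)
    rates = {g: pos / tot for g, (pos, tot) in merged.items()}
    return max(rates.values()) - min(rates.values())

# Two hypothetical clients with binary predictions for groups "a" and "b"
c1 = local_counts([1, 0, 1, 1], ["a", "a", "b", "b"])
c2 = local_counts([0, 1, 1, 0], ["a", "b", "a", "b"])
gap = federated_dp_gap([c1, c2])  # group "a": 2/4, group "b": 3/4 -> gap 0.25
```

Note that sharing even aggregate counts can leak information in small groups; the paper's contribution is precisely to perform such estimation without privacy infringement.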

Authors

Che X; Hu J; Zhou Z; Zhang Y; Chu L

Volume

00

Pagination

pp. 7687-7696

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

December 18, 2024

DOI

10.1109/bigdata62323.2024.10825911

Name of conference

2024 IEEE International Conference on Big Data (BigData)