Training Fair Models in Federated Learning without Data Privacy
Infringement
Journal Articles
Abstract
Training fair machine learning models is becoming increasingly important. As
many powerful models are trained by collaboration among multiple parties, each
holding some sensitive data, it is natural to explore the feasibility of
training fair models in federated learning so that the fairness of trained
models, the data privacy of clients, and the collaboration between clients can
be fully respected simultaneously. However, training fair models in federated
learning is challenging: it is far from trivial to estimate the fairness of a
model without access to the private data of the participating parties, which is
typically forbidden by the privacy requirements of federated learning. In this
paper, we first propose a federated estimation method to
accurately estimate the fairness of a model without infringing the data privacy
of any party. Then, we use the fairness estimation to formulate a novel problem
of training fair models in federated learning. We develop FedFair, a
well-designed federated learning framework, which can successfully train a fair
model with high performance without data privacy infringement. Our extensive
experiments on three real-world data sets demonstrate the excellent performance
of our method in training fair models.
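To illustrate the core idea of estimating model fairness without exposing raw data, the following sketch shows one common scheme: each client reports only aggregate prediction counts per sensitive group, and the server merges these counts to compute a demographic parity gap. This is a hypothetical illustration of federated fairness estimation in general, not the paper's FedFair protocol; the function names and the choice of demographic parity as the fairness measure are assumptions for the example.

```python
# Hypothetical sketch: federated estimation of a demographic parity gap.
# Each client shares only aggregate counts per sensitive group, so no
# individual record ever leaves a client.

def local_fairness_stats(predictions, groups):
    """Client side: (positive predictions, total) for each sensitive group."""
    stats = {}
    for pred, g in zip(predictions, groups):
        pos, tot = stats.get(g, (0, 0))
        stats[g] = (pos + (1 if pred == 1 else 0), tot + 1)
    return stats

def federated_parity_gap(client_stats):
    """Server side: merge per-client counts, then compute the gap between
    the highest and lowest positive-prediction rates across groups."""
    merged = {}
    for stats in client_stats:
        for g, (pos, tot) in stats.items():
            mpos, mtot = merged.get(g, (0, 0))
            merged[g] = (mpos + pos, mtot + tot)
    rates = {g: pos / tot for g, (pos, tot) in merged.items()}
    return max(rates.values()) - min(rates.values())

# Two simulated clients with binary predictions and sensitive groups 'a'/'b'.
c1 = local_fairness_stats([1, 0, 1, 1], ['a', 'a', 'b', 'b'])
c2 = local_fairness_stats([0, 1, 0, 0], ['a', 'b', 'b', 'a'])
gap = federated_parity_gap([c1, c2])  # group rates: a = 1/4, b = 3/4
```

Because only group-level counts cross the network, the server can monitor a fairness constraint during training without ever observing client data, which is the property the abstract refers to.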