FedFair: Training Fair Models In Cross-Silo Federated Learning
Abstract
Building fair machine learning models is becoming increasingly important. As
many powerful models are built through collaboration among multiple parties,
each holding some sensitive data, it is natural to explore the feasibility of
training fair models in cross-silo federated learning so that fairness, privacy,
and collaboration can be fully respected simultaneously. This is a very
challenging task, however, since it is far from trivial to accurately estimate
the fairness of a model without access to the private data of the participating
parties. In this paper, we first propose a federated estimation method that
accurately estimates the fairness of a model without infringing on the data
privacy of any party. We then use this fairness estimate to formulate a novel
problem of training fair models in cross-silo federated learning. We develop
FedFair, a well-designed federated learning framework that can train a fair
model with high performance without any infringement of data privacy. Our
extensive experiments on three real-world datasets demonstrate the excellent
fair-model training performance of our method.
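The core idea sketched in the abstract, that a model's fairness can be estimated without collecting any party's raw data, can be illustrated with a minimal example. Note that this is only a sketch under assumptions of my own: the paper's actual FedFair estimator is not given here, so the snippet instead shows the general decomposition trick, using demographic parity as a stand-in fairness metric. Each silo computes a few local counts on its private data, and a server aggregates only those counts (in practice, e.g., via secure aggregation) to obtain the global fairness estimate. The function names `local_counts` and `federated_dp_gap` are hypothetical.

```python
# Sketch (not the paper's method): a group-fairness statistic such as the
# demographic-parity gap decomposes into sums, so each silo can report only
# aggregate counts and the server never sees individual records.
from typing import List, Tuple


def local_counts(preds, groups) -> Tuple[int, int, int, int]:
    """Computed by each party on its own private data.

    Returns (# positive predictions in group 0, size of group 0,
             # positive predictions in group 1, size of group 1).
    """
    pos0 = sum(1 for p, g in zip(preds, groups) if g == 0 and p == 1)
    n0 = sum(1 for g in groups if g == 0)
    pos1 = sum(1 for p, g in zip(preds, groups) if g == 1 and p == 1)
    n1 = sum(1 for g in groups if g == 1)
    return pos0, n0, pos1, n1


def federated_dp_gap(party_counts: List[Tuple[int, int, int, int]]) -> float:
    """Server-side: aggregate the counts from all parties and compute the
    global demographic-parity gap |P(yhat=1 | g=0) - P(yhat=1 | g=1)|."""
    pos0 = sum(c[0] for c in party_counts)
    n0 = sum(c[1] for c in party_counts)
    pos1 = sum(c[2] for c in party_counts)
    n1 = sum(c[3] for c in party_counts)
    return abs(pos0 / n0 - pos1 / n1)


# Two silos with toy predictions and sensitive-group labels.
counts_a = local_counts([1, 0, 1, 1], [0, 0, 1, 1])
counts_b = local_counts([0, 1, 1, 0], [0, 1, 1, 0])
gap = federated_dp_gap([counts_a, counts_b])  # → 0.75
```

The same decomposition applies to other group-wise statistics; the privacy argument rests on the server receiving only aggregates, which secure aggregation can further protect.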