Journal article

Preventing Image Data Poisoning Attacks in Federated Machine Learning by an Encrypted Verification Key

Abstract

Recent studies have uncovered security issues in most federated learning models. A common but false assumption is that participants are trustworthy and would never train on polluted data. This vulnerability enables attackers to train their local models on polluted data and send the resulting poisoned updates to the training server for aggregation, corrupting the global model. In such a setting, it is challenging for an edge server to thoroughly inspect the data used for model training or to supervise every edge device. This study evaluates the vulnerabilities present in federated learning, explores the types of attacks that can exploit them, and presents a robust prevention scheme to address them. The proposed scheme introduces an encrypted verification key that enables federated learning servers to monitor participants in real time and identify compromised ones. The paper outlines the protocol design of this prevention scheme and presents experimental results demonstrating its effectiveness.
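The abstract does not specify how the encrypted verification key is used, so the following is only an illustrative sketch of the general idea, not the authors' actual protocol. All function names are hypothetical. It assumes the server issues a secret per-round key, each participant authenticates its serialized model update with an HMAC tag under that key, and the server verifies the tag before admitting the update to aggregation:

```python
import hmac
import hashlib
import os

# Hypothetical sketch: a per-round verification key used to vet
# participant updates before aggregation. Not the paper's protocol.

def issue_verification_key() -> bytes:
    """Server: generate a fresh secret verification key for this round."""
    return os.urandom(32)

def tag_update(key: bytes, update: bytes) -> bytes:
    """Participant: authenticate the serialized model update."""
    return hmac.new(key, update, hashlib.sha256).digest()

def verify_update(key: bytes, update: bytes, tag: bytes) -> bool:
    """Server: accept the update only if its tag verifies under the key."""
    expected = hmac.new(key, update, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

# Example round: an honest update passes, a tampered one is rejected.
key = issue_verification_key()
honest_update = b"serialized model weights"
tag = tag_update(key, honest_update)
assert verify_update(key, honest_update, tag)
assert not verify_update(key, b"poisoned weights", tag)
```

An HMAC only proves the update came from a key-holding participant and was not altered in transit; detecting updates trained on poisoned data, as the paper targets, requires the additional real-time monitoring the scheme describes.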

Authors

Jodayree M; He W; Janicki R

Journal

Procedia Computer Science, Vol. 225, pp. 2723–2732

Publisher

Elsevier

Publication Date

January 1, 2023

DOI

10.1016/j.procs.2023.10.264

ISSN

1877-0509
