Chapter

Preventing Text Data Poisoning Attacks in Federated Machine Learning by an Encrypted Verification Key

Abstract

Recent studies reveal significant security problems in most Federated Learning models. They rest on the false assumption that participants are not attackers and will not use poisoned data. This vulnerability allows an attacker to train a local model on polluted data and send the resulting model updates to the edge server for aggregation, creating an opportunity for data poisoning. In such a setting, it is challenging for an edge server to thoroughly examine the data used for model training or to supervise every edge device. This paper evaluates existing vulnerabilities, attacks, and defenses in federated learning, discusses the hazards of data poisoning and backdoor attacks in federated learning, and proposes a robust scheme to prevent all categories of data poisoning attacks on text data. A new two-phase strategy with encryption algorithms allows Federated Learning servers to supervise participants in real time and eliminate infected participants by adding an encrypted verification scheme to the Federated Learning model. The paper includes the protocol design of the prevention scheme and presents experimental results demonstrating its effectiveness.
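The abstract's core idea of gating server-side aggregation on an encrypted verification key can be illustrated in miniature. The sketch below is not the paper's actual two-phase protocol; it is a minimal, hypothetical example assuming a server-issued shared secret and HMAC-SHA256 tags, showing how a server could accept only updates whose verification tags check out and drop the rest before averaging.

```python
import hmac
import hashlib
import json

# Hypothetical key distributed to enrolled participants in a setup phase.
SERVER_KEY = b"demo-shared-secret"

def sign_update(update, key=SERVER_KEY):
    """Client side: serialize a model update (dict of floats) and attach
    an HMAC-SHA256 verification tag computed with the shared key."""
    payload = json.dumps(update, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return payload, tag

def aggregate_verified(submissions, key=SERVER_KEY):
    """Server side: average only the updates whose tags verify;
    submissions with invalid tags are excluded from aggregation."""
    accepted = []
    for payload, tag in submissions:
        expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
        if hmac.compare_digest(tag, expected):
            accepted.append(json.loads(payload))
    if not accepted:
        return {}
    keys = accepted[0].keys()
    return {k: sum(u[k] for u in accepted) / len(accepted) for k in keys}

# An honest participant signs with the key; an attacker without the key
# cannot produce a valid tag for a poisoned update.
good = sign_update({"w": 0.2})
poisoned_payload, _ = sign_update({"w": 9.9})
forged = (poisoned_payload, "0" * 64)  # invalid tag
print(aggregate_verified([good, forged]))  # → {'w': 0.2}
```

This captures only the verification gate; the scheme described in the paper additionally supervises participants in real time across two phases, which this toy example does not model.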

Authors

Jodayree M; He W; Janicki R

Book title

Rough Sets

Series

Lecture Notes in Computer Science

Volume

14481

Pagination

pp. 612-626

Publisher

Springer Nature

Publication Date

January 1, 2023

DOI

10.1007/978-3-031-50959-9_42
