Chapter

Preventing Text Data Poisoning Attacks in Federated Machine Learning by an Encrypted Verification Key

Abstract

Recent studies reveal significant security problems in most federated learning models. Many rely on the false assumption that participants are not attackers and will not train on poisoned data. This vulnerability allows an attacker to train locally on polluted data and send the resulting model updates to the edge server for aggregation, creating an opportunity for data poisoning. In such a setting, it is challenging for an edge …
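To illustrate the vulnerability the abstract describes (this is not the chapter's actual defense, only a hypothetical sketch), the snippet below shows a plain federated-averaging round in which one client submits an inflated update trained on poisoned data; because the aggregator trusts every participant, the single malicious update skews the global model. All values and names here are invented for illustration.

```python
# Hypothetical sketch of one FedAvg round with a poisoned client update.
# Not the method from the chapter; purely illustrative.

def fed_avg(updates):
    """Element-wise average of client weight vectors (plain FedAvg)."""
    n = len(updates)
    dim = len(updates[0])
    return [sum(u[i] for u in updates) / n for i in range(dim)]

# Three honest clients report similar updates; one attacker sends an
# inflated update derived from locally poisoned training data.
honest = [[0.10, -0.20], [0.12, -0.18], [0.11, -0.22]]
poisoned = [[5.0, 5.0]]

clean_model = fed_avg(honest)             # aggregation without the attacker
attacked_model = fed_avg(honest + poisoned)  # attacker's update included
```

Here `clean_model` stays near the honest clients' consensus, while `attacked_model` is pulled far off by the single poisoned contribution, which is why the aggregator needs some way to verify updates before averaging them.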

Authors

Jodayree M; He W; Janicki R

Book title

Rough Sets

Series

Lecture Notes in Computer Science

Volume

14481

Pagination

pp. 612-626

Publisher

Springer Nature

Publication Date

2023

DOI

10.1007/978-3-031-50959-9_42