Investigating adversarial attacks and defences in federated learning.

Lead Research Organisation: King's College London
Department Name: Informatics

Abstract

My research focuses on adversarial attacks and defences in federated learning (FL) and how they compare to those in the broader machine learning domain.

With respect to adversarial attacks, federated learning introduces new attack surfaces: clients have white-box access to their local models, which facilitates attacks such as poisoning at training time and evasion at inference time. Of particular interest in FL settings is "model poisoning", a bigger threat than traditional "data poisoning" attacks because an adversary can submit arbitrary model updates that directly influence the global model. A further category of attacks targets the privacy/confidentiality of the models, the participants, or the training data in FL settings.
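
To make the model-poisoning threat concrete, here is a minimal sketch, assuming a plain FedAvg-style aggregator over flattened numpy updates (all names and numbers are illustrative, not a specific published attack): a single client that submits an arbitrary, scaled update can dominate the averaged global update.

    import numpy as np

    def fedavg(client_updates):
        # Plain (unweighted) FedAvg: the global update is the mean of
        # the clients' submitted updates.
        return np.mean(np.stack(client_updates), axis=0)

    rng = np.random.default_rng(0)
    honest_updates = [rng.normal(0.0, 0.1, size=5) for _ in range(9)]

    # A model-poisoning adversary need not touch its training data: it
    # submits an arbitrary update, here the honest direction reversed
    # and scaled so that it dominates the average over 9 honest clients.
    malicious_update = -np.mean(np.stack(honest_updates), axis=0) * 100.0

    global_update = fedavg(honest_updates + [malicious_update])
    print(global_update)  # driven almost entirely by the one malicious client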

Several defences have been proposed against these attacks, including robust aggregation methods, anomaly-detection techniques, and differential privacy. Many of these methods have been shown to be ineffective or easy to circumvent, and others provide some mitigation only at the expense of model performance.
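
As one illustration of the robust-aggregation family, the sketch below (again illustrative; not the implementation of any particular published defence) swaps the mean for a coordinate-wise median, which bounds any single client's influence on the global update.

    import numpy as np

    def median_aggregate(client_updates):
        # Coordinate-wise median: each parameter of the global update is
        # the median of the clients' values, so a minority of arbitrarily
        # large (poisoned) updates cannot drag the result far.
        return np.median(np.stack(client_updates), axis=0)

    rng = np.random.default_rng(1)
    updates = [rng.normal(0.0, 0.1, size=5) for _ in range(9)]
    updates.append(np.full(5, 100.0))          # one grossly poisoned update

    print(np.mean(np.stack(updates), axis=0))  # mean is pulled towards 100
    print(median_aggregate(updates))           # median stays near honest values

The same structure accommodates trimmed means or selection rules such as Krum; the trade-off noted above is that such rules can discard contributions from honest but atypical clients, which is exactly the non-IID setting this research targets.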

The focus of this research is to investigate ways of improving the robustness of FL to adversarial attacks (primarily poisoning) without harming model performance, while accounting for the non-IID nature of the participants' data.

Studentship Projects

Project Reference  Relationship  Related To    Start       End         Student Name
EP/V519546/1                                   01/10/2020  13/03/2026
2554063            Studentship   EP/V519546/1  01/06/2021  26/04/2026  Mohamed Abouhashem