Security and Privacy in Federated Learning

Lead Research Organisation: University College London
Department Name: Computer Science

Abstract

Federated machine learning is increasingly being deployed in the wild. Because the training data either never leaves the users' machines or is never exposed to untrusted parties, these techniques are a good fit for scenarios where data is sensitive and the participants want to construct a joint model without disclosing their datasets. However, since model updates and outputs are still indirectly derived from the training data, we need to assess whether, and to what extent, they leak unintended information about the participants' training data, and, if they do, what can be done to mitigate such leakage. Concretely, we plan to search for such leakage via inference attacks in real-world settings and to investigate defenses, including training regularizers such as Dropout and Weight Normalization, user-level differential privacy, adversarial learning techniques, and reliance on black-box trusted servers. With respect to the latter, and to centralized machine learning more generally, we also plan to study its trade-offs against federated learning in terms of the communication and computation overhead incurred by end-users and the privacy and security guarantees it offers.
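
To make one of the defenses above concrete, the sketch below shows user-level differential privacy applied at the server's aggregation step of federated averaging: each client's update is clipped to a fixed L2 norm, so no single user can move the average by more than that norm divided by the number of clients, and Gaussian noise calibrated to this sensitivity is added to the mean. This is a minimal illustration in Python/NumPy; the function name, clipping norm, and noise multiplier are illustrative choices, not part of the project.

import numpy as np

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Aggregate client updates with user-level DP (Gaussian mechanism sketch).

    Each update is clipped to L2 norm `clip_norm`; noise is scaled to the
    per-round sensitivity of the mean to one user, clip_norm / n.
    """
    rng = rng or np.random.default_rng()
    n = len(client_updates)
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        # Scale down any update whose L2 norm exceeds clip_norm.
        clipped.append(u * min(1.0, clip_norm / (norm + 1e-12)))
    mean = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to sensitivity clip_norm / n (illustrative scale).
    noise = rng.normal(0.0, noise_multiplier * clip_norm / n, size=mean.shape)
    return mean + noise

# Example usage: aggregate three simulated client updates.
updates = [np.random.default_rng(i).normal(size=10) for i in range(3)]
new_global_delta = dp_federated_average(updates)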

Publications

Studentship Projects

Project Reference  Relationship  Related To    Start       End         Student Name
EP/T517628/1                                   01/10/2019  30/09/2024
2794921            Studentship   EP/T517628/1  01/11/2019  31/10/2023  Mohammad Naseri