Federated Learning Privacy
Federated Learning (FL) enables a model to be trained in a distributed manner: clients share model updates with a server rather than their raw data. These updates, however, can leak private training data through inference attacks such as gradient inversion, and the training process itself can be corrupted by model poisoning. We study novel attack techniques alongside corresponding defenses and mitigations.
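As a concrete illustration of the gradient inversion threat, the minimal sketch below follows the classic "Deep Leakage from Gradients" approach (Zhu et al., 2019): an attacker who observes a client's uploaded gradient optimizes dummy inputs and labels until their gradient matches. The toy model, data shapes, and optimizer settings are illustrative assumptions and do not reproduce the specific attacks from the publications below.

```python
# Sketch of a DLG-style gradient inversion attack (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy client model and one private training example (assumed shapes).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])

# The server observes only the gradient the client would upload.
loss_fn = nn.CrossEntropyLoss()
true_grads = torch.autograd.grad(
    loss_fn(model(x_true), y_true), list(model.parameters())
)

# The attacker optimizes dummy data so its gradient matches the observed one.
x_dummy = torch.rand_like(x_true, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)  # soft labels
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    # Cross entropy with soft dummy labels.
    dummy_loss = torch.sum(
        torch.softmax(y_dummy, -1) * -torch.log_softmax(model(x_dummy), -1)
    )
    dummy_grads = torch.autograd.grad(
        dummy_loss, list(model.parameters()), create_graph=True
    )
    # Distance between the dummy gradient and the observed gradient.
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(30):
    opt.step(closure)

print("reconstruction error:", (x_dummy - x_true).pow(2).mean().item())
```

On such a small linear model the reconstruction error drops close to zero; the publications below study when and how well this works for realistic models such as diffusion and time series forecasting networks.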
Publications
- Gradient Inversion of Federated Diffusion Models
- TS-Inverse: A Gradient Inversion Attack Tailored for Federated Time Series Forecasting Models
- On Quantifying the Gradient Inversion Risk of Data Reuse in Federated Learning Systems
  - Conference: SRDS 2024
  - Authors: Jiyue Huang, Lydia Y. Chen, Stefanie Roos
  - Code
- Fabricated Flips: Poisoning Federated Learning without Data
- Exploring and Exploiting Data-Free Model Stealing
  - Conference: DSN 2023
  - Authors: Chi Hong, Jiyue Huang, Robert Birke, Lydia Y. Chen
  - Paper
- Defending Against Free-Riders Attacks in Distributed Generative Adversarial Networks
- LeadFL: Client Self-Defense against Model Poisoning in Federated Learning
- AGIC: Approximate Gradient Inversion Attack on Federated Learning
  - Conference: SRDS 2022
  - Authors: Jin Xu, Chi Hong, Jiyue Huang, Lydia Y. Chen, Jérémie Decouchant
  - Paper
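To complement the inversion example above, the sketch below illustrates the other threat model studied in this line of work: model poisoning. A single malicious client flips and scales its update so that plain FedAvg aggregation is dragged away from the honest direction. The aggregation loop, update shapes, and scaling factor are illustrative assumptions, not the method of any paper listed above.

```python
# Sketch of an untargeted model poisoning attack on FedAvg (illustrative only).
import torch

def fedavg(updates):
    """Server-side aggregation: plain mean of the client updates."""
    return torch.stack(updates).mean(dim=0)

def malicious_update(honest_update, boost=-10.0):
    """A poisoned client flips and scales its update to skew the aggregate."""
    return boost * honest_update

# Toy round: 9 honest clients, 1 attacker, 4-parameter model.
torch.manual_seed(0)
honest = [torch.randn(4) * 0.1 for _ in range(9)]
attacker = malicious_update(honest[0])

clean_agg = fedavg(honest)
poisoned_agg = fedavg(honest + [attacker])
print("clean:   ", clean_agg)
print("poisoned:", poisoned_agg)
```

Because FedAvg averages updates without inspecting them, one sufficiently scaled contribution can dominate the round; robust aggregation and client-side defenses such as those studied in the publications above aim to limit exactly this effect.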