Federated Learning Privacy

Federated Learning (FL) enables multiple clients to train a shared model in a distributed manner without exchanging their raw data. The model updates exchanged during training are nevertheless vulnerable to inference attacks, such as gradient inversion, and to model poisoning. We study novel attack techniques as well as defenses and mitigations.
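
To make the inference threat concrete, the sketch below illustrates a basic gradient-matching (gradient inversion) attack on a toy PyTorch model: the attacker observes the gradient a client would share in one round and optimises a dummy input until its gradient matches the observed one. This is a minimal illustration under simplifying assumptions (tiny MLP, a single example, the label assumed known); the model and variable names are illustrative and it is not the method of any specific paper listed here.

# Minimal gradient-inversion sketch (illustrative only; assumes PyTorch).
# A client computes a gradient on its private example; the attacker
# optimises a dummy input so that its gradient matches the observed one,
# thereby approximately reconstructing the private input.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))
loss_fn = nn.CrossEntropyLoss()

# Private training example held by the client; the label is assumed known.
x_true = torch.randn(1, 16)
y_true = torch.tensor([2])

# Gradient the client would share in one FL round.
true_grads = [g.detach() for g in
              torch.autograd.grad(loss_fn(model(x_true), y_true),
                                  model.parameters())]

# Attacker: start from random dummy data and match the observed gradient.
x_dummy = torch.randn(1, 16, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy], max_iter=200)

def closure():
    optimizer.zero_grad()
    dummy_grads = torch.autograd.grad(loss_fn(model(x_dummy), y_true),
                                      model.parameters(), create_graph=True)
    # Squared distance between the dummy gradient and the observed gradient.
    grad_diff = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

optimizer.step(closure)
print("reconstruction error:", (x_dummy - x_true).norm().item())

The inversion attacks listed below study this threat in more demanding settings, including diffusion models, time-series forecasting models, data reuse across rounds, and approximate multi-round gradients.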

Publications

  1. Gradient Inversion of Federated Diffusion Models
    • Conference: ARES 2025
    • Authors: Jiyue Huang, Chi Hong, Stefanie Roos, Lydia Y. Chen
    • Paper · Code
  2. TS-Inverse: A Gradient Inversion Attack Tailored for Federated Time Series Forecasting Models
    • Conference: SaTML 2025
    • Authors: Caspar Meijer, Jiyue Huang, Shreshtha Sharma, Elena Lazovik, Lydia Y. Chen
    • Paper · Code
  3. On Quantifying the Gradient Inversion Risk of Data Reuse in Federated Learning Systems
    • Conference: SRDS 2024
    • Authors: Jiyue Huang, Lydia Y. Chen, Stefanie Roos
    • Code
  4. Fabricated Flips: Poisoning Federated Learning without Data
    • Conference: DSN 2023
    • Authors: Jiyue Huang, Zilong Zhao, Lydia Y. Chen, Stefanie Roos
    • Paper · Code
  5. Exploring and Exploiting Data-Free Model Stealing
    • Conference: DSN 2023
    • Authors: Chi Hong, Jiyue Huang, Robert Birke, Lydia Y. Chen
    • Paper
  6. Defending Against Free-Riders Attacks in Distributed Generative Adversarial Networks
    • Conference: FC 2023
    • Authors: Zilong Zhao, Jiyue Huang, Lydia Y. Chen, Stefanie Roos
    • Paper · Code
  7. LeadFL: Client Self-Defense against Model Poisoning in Federated Learning
    • Conference: ICML 2023
    • Authors: Chaoyi Zhu, Stefanie Roos, Lydia Y. Chen
  8. AGIC: Approximate Gradient Inversion Attack on Federated Learning
    • Conference: SRDS 2022
    • Authors: Jin Xu, Chi Hong, Jiyue Huang, Lydia Y. Chen, Jérémie Decouchant
    • Paper