Federated Learning (FL) allows model training to be performed in a distributed manner without sharing raw data. It is, however, vulnerable to model poisoning, gradient inversion, and inference attacks under different adversarial assumptions. We study novel attacks, defenses, and mitigations under these threat models.
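To illustrate the kind of leakage gradient inversion exploits, here is a minimal, self-contained sketch (not taken from any of the papers below): for a single dense layer with a bias term, the per-sample gradient satisfies dL/dW_i = (dL/db) · x_i for any loss, so an honest-but-curious server that observes one sample's gradients can recover the input exactly as the ratio of weight gradient to bias gradient. All names below are our own for illustration.

```python
# Hypothetical sketch: exact input recovery from the gradients of one
# linear layer with bias. For L = 0.5 * (pred - y)^2 on a single sample,
# dL/dw_i = err * x_i and dL/db = err, so x_i = (dL/dw_i) / (dL/db).

def forward_and_grads(w, b, x, y):
    """Squared-error loss on one sample; returns (grad_w, grad_b)."""
    pred = sum(wi * xi for wi, xi in zip(w, x)) + b
    err = pred - y                       # dL/d(pred)
    grad_w = [err * xi for xi in x]      # dL/dw_i = err * x_i
    grad_b = err                         # dL/db   = err
    return grad_w, grad_b

def invert_gradient(grad_w, grad_b):
    """Recover the private input from the shared gradients."""
    return [gw / grad_b for gw in grad_w]

w, b = [0.5, -1.0, 2.0], 0.1
x_private, y = [3.0, 4.0, 5.0], 1.0

gw, gb = forward_and_grads(w, b, x_private, y)
x_recovered = invert_gradient(gw, gb)    # equals x_private when gb != 0
```

This closed-form case only covers one layer and one sample; the papers below study the much harder settings of deep models, aggregated batches, and repeated data use.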


Publications


Gradient Inversion of Federated Diffusion Models

Jiyue Huang, Chi Hong, Stefanie Roos and Lydia Y. Chen

ARES 2025: 📄 Paper | 💻 Code

Citation
@inproceedings{huang2025gidm,
    author      = {Jiyue Huang and
                   Chi Hong and
                   Stefanie Roos and
                   Lydia Y. Chen},
    title       = {Gradient Inversion of Federated Diffusion Models},
    booktitle   = {Proceedings of the 19th International Conference on Availability, Reliability and Security, {ARES} 2025},
    year        = {2025}
}

TS-Inverse: A Gradient Inversion Attack Tailored for Federated Time Series Forecasting Models

Caspar Meijer, Jiyue Huang, Shreshtha Sharma, Elena Lazovik, Lydia Y. Chen

SaTML 2025: 📄 Paper | 💻 Code

Citation
@inproceedings{meijer2025tsinverse,
    author      = {Caspar Meijer and
                   Jiyue Huang and
                   Shreshtha Sharma and
                   Elena Lazovik and
                   Lydia Y. Chen},
    title       = {TS-Inverse: {A} Gradient Inversion Attack Tailored for Federated Time Series Forecasting Models},
    booktitle   = {IEEE Conference on Secure and Trustworthy Machine Learning, SaTML 2025},
    publisher   = {IEEE},
    year        = {2025},
    doi         = {10.1109/SATML64287.2025.00014}
}

On Quantifying the Gradient Inversion Risk of Data Reuse in Federated Learning Systems

Jiyue Huang, Lydia Y. Chen, Stefanie Roos

SRDS 2024: 📄 Paper | 💻 Code

Citation
@inproceedings{huang2024cgi,
    author      = {Jiyue Huang and
                   Lydia Y. Chen and
                   Stefanie Roos},
    title       = {On Quantifying the Gradient Inversion Risk of Data Reuse in Federated Learning Systems},
    booktitle   = {43rd International Symposium on Reliable Distributed Systems, {SRDS} 2024},
    publisher   = {IEEE},
    year        = {2024},
    doi         = {10.1109/SRDS64841.2024.00031}
}

Fabricated Flips: Poisoning Federated Learning without Data

Jiyue Huang, Zilong Zhao, Lydia Y. Chen, Stefanie Roos

DSN 2023: 📄 Paper | 💻 Code

Citation
@inproceedings{huang2024dfa,
    author      = {Jiyue Huang and
                   Zilong Zhao and
                   Lydia Y. Chen and
                   Stefanie Roos},
    title       = {Fabricated Flips: Poisoning Federated Learning without Data},
    booktitle   = {53rd Annual {IEEE/IFIP} International Conference on Dependable Systems and Networks, {DSN} 2023},
    publisher   = {IEEE},
    year        = {2023},
    doi         = {10.1109/DSN58367.2023.00036}
}

LeadFL: Client Self-Defense against Model Poisoning in Federated Learning

Chaoyi Zhu, Stefanie Roos, Lydia Y. Chen

ICML 2023: 📄 Paper | 💻 Code | 🖼️ Poster

Citation
@inproceedings{zhu2023leadfl,
    author      = {Chaoyi Zhu and
                   Stefanie Roos and
                   Lydia Y. Chen},
    title       = {LeadFL: Client Self-Defense against Model Poisoning in Federated Learning},
    booktitle   = {International Conference on Machine Learning, {ICML} 2023},
    series      = {Proceedings of Machine Learning Research},
    publisher   = {PMLR},
    year        = {2023}
}

Defending Against Free-Riders Attacks in Distributed Generative Adversarial Networks

Zilong Zhao, Jiyue Huang, Lydia Y. Chen, Stefanie Roos

FC 2023: 📄 Paper | 💻 Code

Citation
@inproceedings{DBLP:conf/fc/ZhaoHCR23,
  author       = {Zilong Zhao and
                  Jiyue Huang and
                  Lydia Y. Chen and
                  Stefanie Roos},
  editor       = {Foteini Baldimtsi and
                  Christian Cachin},
  title        = {Defending Against Free-Riders Attacks in Distributed Generative Adversarial
                  Networks},
  booktitle    = {International Conference on Financial Cryptography and Data Security, {FC} 2023},
  series       = {Lecture Notes in Computer Science},
  volume       = {13951},
  pages        = {200--217},
  publisher    = {Springer},
  year         = {2023},
}

Exploring and Exploiting Data-Free Model Stealing

Chi Hong, Jiyue Huang, Robert Birke, Lydia Y. Chen

ECML 2023: 📄 Paper

Citation
@inproceedings{DBLP:conf/pkdd/HongHBC23,
  author       = {Chi Hong and
                  Jiyue Huang and
                  Robert Birke and
                  Lydia Y. Chen},
  editor       = {Danai Koutra and
                  Claudia Plant and
                  Manuel Gomez Rodriguez and
                  Elena Baralis and
                  Francesco Bonchi},
  title        = {Exploring and Exploiting Data-Free Model Stealing},
  booktitle    = {European Conference on Machine Learning and Knowledge Discovery in Databases, {ECML} {PKDD} 2023},
  series       = {Lecture Notes in Computer Science},
  volume       = {14173},
  pages        = {20--35},
  publisher    = {Springer},
  year         = {2023},
}

AGIC: Approximate Gradient Inversion Attack on Federated Learning

Jin Xu, Chi Hong, Jiyue Huang, Lydia Y. Chen, Jérémie Decouchant

SRDS 2022: 📄 Paper

Citation
@inproceedings{DBLP:conf/srds/XuHHCD22,
  author       = {Jin Xu and
                  Chi Hong and
                  Jiyue Huang and
                  Lydia Y. Chen and
                  Jérémie Decouchant},
  title        = {AGIC: Approximate Gradient Inversion Attack on Federated Learning},
  booktitle    = {41st International Symposium on Reliable Distributed Systems, {SRDS} 2022},
  pages        = {12--22},
  publisher    = {IEEE},
  year         = {2022},
}