ARMOR: Differential Model Distribution for Adversarially Robust Federated Learning

  • Beihang University
  • PowerTensors.AI

Research output: Contribution to journal › Article › peer-review

Abstract

In this work, we formalize the concept of differential model robustness (DMR), a new property for ensuring model security in federated learning (FL) systems. In most conventional FL frameworks, all clients receive the same global model. If a Byzantine client maliciously generates adversarial samples against the global model, the attack immediately transfers to all other benign clients. To address this attack-transferability concern and improve the DMR of FL systems, we propose the notion of differential model distribution (DMD), in which the server distributes different models to different clients. As a concrete instantiation of DMD, we propose the ARMOR framework, which uses differential adversarial training to prevent a corrupted client from launching white-box adversarial attacks against other clients, since the local model received by the corrupted client differs from those of the benign clients. Through extensive experiments, we demonstrate that ARMOR significantly reduces both the attack success rate (ASR) and the average adversarial transfer rate (AATR) across different FL settings. For instance, for a 35-client FL system, the ASR and AATR can be reduced by as much as 85% and 80%, respectively, on the MNIST dataset.
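To make the DMD idea concrete, below is a minimal sketch of a server deriving per-client model variants via a short, seeded adversarial fine-tune before distribution. This is an illustration under assumptions, not ARMOR's exact differential adversarial training procedure; the names (make_client_variant, server_loader, global_model), the FGSM-style perturbation, and all hyperparameters are hypothetical.

```python
# Illustrative DMD sketch (assumed PyTorch setup, not the paper's algorithm).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader

def make_client_variant(global_model: nn.Module,
                        server_loader: DataLoader,
                        seed: int,
                        eps: float = 0.1,
                        steps: int = 10) -> nn.Module:
    """Derive a per-client model by a short, seeded adversarial fine-tune."""
    torch.manual_seed(seed)                # per-client randomness -> distinct variants
    model = copy.deepcopy(global_model)    # never ship the raw global model
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    data_iter = iter(server_loader)
    for _ in range(steps):
        try:
            x, y = next(data_iter)
        except StopIteration:              # restart the loader if it runs out
            data_iter = iter(server_loader)
            x, y = next(data_iter)
        x.requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad, = torch.autograd.grad(loss, x)
        x_adv = (x + eps * grad.sign()).detach()   # FGSM-style perturbation
        opt.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        opt.step()
    return model

# Server-side distribution: each client i gets a distinct variant, so an
# adversarial example crafted against one variant transfers poorly to others.
# (global_model and server_loader are placeholders for the server's state.)
# client_models = {i: make_client_variant(global_model, server_loader, seed=i)
#                  for i in range(num_clients)}
```

The key design point this sketch captures is that diversity among distributed models, here induced by per-client seeds and adversarial perturbations, is what breaks white-box transferability between clients.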

Original language: English
Article number: 842
Journal: Electronics (Switzerland)
Volume: 12
Issue number: 4
DOIs
State: Published - Feb 2023

Keywords

  • Byzantine robustness
  • adversarial training
  • differential model distribution
  • federated learning
  • model robustness
