Distillation-Based User Selection for Heterogeneous Federated Learning

  • Bowen Li
  • Wenling Li*
  • *Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Federated learning is an emerging distributed machine learning technology that allows users to train machine learning models on decentralized private data. Owing to the huge communication overhead, traditional federated learning algorithms sample users at random, which may degrade model performance because of the statistical heterogeneity among users. In this paper, we propose a distillation-based user selection algorithm for federated learning in heterogeneous settings. Based on knowledge distillation, the soft targets of users are uploaded to the server and serve as the basis for user selection. Our algorithm reduces the statistical heterogeneity of the selected users while incurring low additional communication and computation overhead. Experiments on MNIST and Fashion-MNIST show that the proposed algorithm achieves better model performance than the federated averaging algorithm and several other user selection algorithms.
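The abstract does not spell out how soft targets drive selection, so the following is only a minimal sketch of the general idea, under assumptions of our own: each user uploads a temperature-softened class-probability vector averaged over its local data, and the server greedily picks the subset whose combined soft targets are least skewed (closest to uniform), as a stand-in criterion for "reducing statistical heterogeneity". The function names and the greedy criterion are illustrative, not the paper's actual method.

```python
import numpy as np

def soft_targets(logits, T=3.0):
    """Temperature-softened class probabilities, averaged over local samples.

    `logits` has shape (n_samples, n_classes); T is the distillation temperature.
    """
    z = logits / T
    p = np.exp(z - z.max(axis=1, keepdims=True))  # numerically stable softmax
    p /= p.sum(axis=1, keepdims=True)
    return p.mean(axis=0)  # one probability vector summarizing the user's data

def select_users(user_targets, k):
    """Greedily pick k users whose averaged soft targets stay closest to uniform.

    `user_targets` maps user id -> soft-target vector. This uniformity criterion
    is an assumption made for illustration, not the criterion from the paper.
    """
    n_classes = len(next(iter(user_targets.values())))
    uniform = np.full(n_classes, 1.0 / n_classes)
    pool = dict(user_targets)
    chosen, total = [], np.zeros(n_classes)
    for _ in range(k):
        # Choose the user whose addition minimizes L1 distance to uniform.
        best = min(
            pool,
            key=lambda u: np.abs((total + pool[u]) / (len(chosen) + 1) - uniform).sum(),
        )
        total += pool.pop(best)
        chosen.append(best)
    return chosen
```

In a round of training, the server would then request model updates only from the users returned by `select_users`, so the extra cost per round is one small probability vector per user rather than raw data or full models.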

Original language: English
Article number: 100007
Journal: International Journal of Network Dynamics and Intelligence
Volume: 3
Issue number: 2
DOIs
State: Published - 2024

Keywords

  • federated learning
  • knowledge distillation
  • user selection
