Abstract
Federated learning is an emerging distributed machine learning technology that allows users to train machine learning models on decentralized private data. Owing to the huge communication overhead, traditional federated learning algorithms sample users randomly, which may degrade model performance due to the statistical heterogeneity of users. In this paper, we propose a distillation-based user selection algorithm for federated learning in heterogeneous settings. Based on knowledge distillation, users' soft targets are uploaded to the server as the basis for user selection. Our algorithm reduces the statistical heterogeneity of the selected users while incurring only low additional communication and computation overhead. Experiments on MNIST and Fashion-MNIST show that the proposed algorithm achieves better model performance than the federated averaging algorithm and several other user selection algorithms.
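The abstract describes a server that receives each user's soft targets and selects a subset with low statistical heterogeneity. The paper's exact selection criterion is not stated here, so the following is only a minimal sketch of one plausible realization: a greedy procedure that picks the users whose averaged soft targets best approximate the population-wide average. All function and variable names are illustrative, not from the paper.

```python
import numpy as np

def select_users(soft_targets, k):
    """Greedily select k users whose averaged soft targets are closest
    (in Euclidean distance) to the population-wide average soft target,
    a simple proxy for low statistical heterogeneity.

    soft_targets: array-like of shape (n_users, n_classes), each row a
    user's averaged softmax output (its "soft target"). Illustrative
    only -- the paper's actual selection rule may differ.
    """
    soft_targets = np.asarray(soft_targets, dtype=float)
    global_avg = soft_targets.mean(axis=0)  # population-level distribution
    selected, remaining = [], list(range(len(soft_targets)))
    for _ in range(k):
        # Pick the user that brings the running average of the selected
        # subset closest to the global average.
        best = min(
            remaining,
            key=lambda u: np.linalg.norm(
                soft_targets[selected + [u]].mean(axis=0) - global_avg
            ),
        )
        selected.append(best)
        remaining.remove(best)
    return selected
```

For example, with two class-skewed users, one duplicate, and one balanced user, the greedy rule first picks the balanced user and then a skewed one that pulls the subset average back toward the population average.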
| Original language | English |
|---|---|
| Article number | 100007 |
| Journal | International Journal of Network Dynamics and Intelligence |
| Volume | 3 |
| Issue number | 2 |
| DOIs | |
| State | Published - 2024 |
Keywords
- federated learning
- knowledge distillation
- user selection