Abstract
One vital challenge in federated learning (FL) is the statistical heterogeneity of data across clients, which degrades the performance of the final model. A common approach to this problem, called personalized federated learning (PFL), is to train a personalized model for each client. A key design issue in PFL methods is determining which parts of the model should be personalized for each client; one popular choice, for example, is to personalize the batch normalization layers. In this paper, we propose ChannelFed, a new PFL method that personalizes the channel attention module. ChannelFed is designed based on the following observation: channel attention assigns different weights to channels for different classes of data, which can be exploited to capture knowledge of heterogeneous data from different clients. By keeping the channel attention module localized, ChannelFed enables clients to concentrate on client-specific channels. ChannelFed further normalizes across samples in the channel attention module to better fit statistical heterogeneity. Experiments on the CIFAR-10, Fashion-MNIST, and CIFAR-100 datasets demonstrate that ChannelFed outperforms other PFL methods under statistical heterogeneity.
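To make the idea concrete, the following is a minimal sketch of a squeeze-and-excitation-style channel attention module of the kind the abstract describes: a global average pool per channel, a small gating network, and a per-channel rescaling. In a ChannelFed-style setup, the gating weights (`w_reduce`, `w_expand` below, both hypothetical names) would be kept local to each client while the rest of the model is federated. This is an illustrative sketch only; the paper's exact module, and its normalization across samples, are not specified on this page.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feature_maps, w_reduce, w_expand):
    """SE-style channel attention (illustrative sketch).

    feature_maps: list of C channels, each a flat list of spatial values.
    w_reduce:     R x C weight matrix (squeeze to R hidden units).
    w_expand:     C x R weight matrix (expand back to C gates).
    Returns the feature maps, each channel scaled by its attention gate.
    In a PFL setting, w_reduce/w_expand would stay on the client
    (personalized) while the backbone weights are aggregated globally.
    """
    # Squeeze: global average pool per channel.
    z = [sum(ch) / len(ch) for ch in feature_maps]
    # Excitation: linear -> ReLU -> linear -> sigmoid gate per channel.
    h = [max(0.0, sum(w * zi for w, zi in zip(row, z))) for row in w_reduce]
    s = [sigmoid(sum(w * hi for w, hi in zip(row, h))) for row in w_expand]
    # Scale: reweight each channel by its gate, so client-specific
    # gates can emphasize client-specific channels.
    return [[v * si for v in ch] for ch, si in zip(feature_maps, s)]
```

For example, with two channels holding constant values 1 and 2 and shared gate weights, both channels receive the same gate, so the 2:1 ratio between them is preserved after rescaling; class- or client-specific gate weights would instead shift that ratio.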
| Original language | English |
|---|---|
| Pages (from-to) | 2987-2992 |
| Number of pages | 6 |
| 期刊 | Proceedings - IEEE Global Communications Conference, GLOBECOM |
| DOI | |
| Publication status | Published - 2022 |
| Event | 2022 IEEE Global Communications Conference, GLOBECOM 2022 - Rio de Janeiro, Brazil. Duration: 4 Dec 2022 → 8 Dec 2022 |
Fingerprint
Dive into the research topics of 'ChannelFed: Enabling Personalized Federated Learning via Localized Channel Attention'. Together they form a unique fingerprint.