TY - GEN
T1 - Personalized Federated Learning via Dual Alignment of Semantic Knowledge and Feature Prototypes
AU - Sun, Bingli
AU - Tu, Yuchun
AU - Quan, Hongyan
AU - Song, Xiao
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2026.
PY - 2026
Y1 - 2026
N2 - Federated learning (FL) often suffers from client drift and inconsistent representations due to heterogeneous data distributions, limiting both generalization and personalization. Existing prototype-based methods partially address these issues but struggle to unify semantic representations across clients. In this paper, we propose FedCoAlign, a personalized federated learning (PFL) framework that jointly employs knowledge distillation and prototype alignment to enhance semantic consistency. Specifically, the global model output serves as soft labels to guide local training and mitigate client drift, while class prototypes are aligned with a global semantic space to improve representation consistency. Compared to existing personalized methods, FedCoAlign does not rely on complex task modeling but achieves performance improvements by introducing lightweight semantic consistency constraints. Experiments on multiple benchmarks show that FedCoAlign achieves superior performance and robustness, especially under highly heterogeneous scenarios, highlighting its effectiveness as a new paradigm for semantic-consistent personalization in federated learning.
AB - Federated learning (FL) often suffers from client drift and inconsistent representations due to heterogeneous data distributions, limiting both generalization and personalization. Existing prototype-based methods partially address these issues but struggle to unify semantic representations across clients. In this paper, we propose FedCoAlign, a personalized federated learning (PFL) framework that jointly employs knowledge distillation and prototype alignment to enhance semantic consistency. Specifically, the global model output serves as soft labels to guide local training and mitigate client drift, while class prototypes are aligned with a global semantic space to improve representation consistency. Compared to existing personalized methods, FedCoAlign does not rely on complex task modeling but achieves performance improvements by introducing lightweight semantic consistency constraints. Experiments on multiple benchmarks show that FedCoAlign achieves superior performance and robustness, especially under highly heterogeneous scenarios, highlighting its effectiveness as a new paradigm for semantic-consistent personalization in federated learning.
KW - Knowledge Distillation
KW - Non-IID Data
KW - Personalized Federated Learning
KW - Prototype Learning
UR - https://www.scopus.com/pages/publications/105023188090
U2 - 10.1007/978-981-95-4472-1_22
DO - 10.1007/978-981-95-4472-1_22
M3 - Conference contribution
AN - SCOPUS:105023188090
SN - 9789819544714
T3 - Communications in Computer and Information Science
SP - 252
EP - 258
BT - Methods and Applications for Modeling and Simulation of Complex Systems - 24th Asia Simulation Conference, AsiaSim 2025, Proceedings
A2 - Cai, Wentong
A2 - Low, Malcolm
A2 - Tan, Gary
A2 - D'Angelo, Gabriele
A2 - Ta, Duong
PB - Springer Science and Business Media Deutschland GmbH
T2 - 24th Asia Simulation Conference on Methods and Applications for Modeling and Simulation of Complex Systems, AsiaSim 2025
Y2 - 17 November 2025 through 19 November 2025
ER -