TY - GEN
T1 - Fitness-Driven Evolutionary Federated Learning
T2 - 32nd International Conference on Neural Information Processing, ICONIP 2025
AU - Yu, Yichun
AU - Lan, Yuqing
AU - Yang, Xiaoyi
AU - Xing, Zhihuan
AU - Zheng, Han
AU - Yu, Dan
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2026.
PY - 2026
Y1 - 2026
N2 - Federated Learning (FL) enables decentralized nodes to collaboratively train models while maintaining data privacy. However, traditional FL methods often face significant challenges, including high communication overhead and slow convergence, especially when data across nodes is heterogeneous. To address these issues, this paper introduces a novel framework named Fitness-Driven Evolutionary Federated Learning (FD-EFL), which effectively combines evolutionary strategies (ES) with adaptive client selection and dynamic population adjustment. The primary innovation of FD-EFL is its fitness-driven information-sharing approach, wherein clients communicate only concise fitness metrics representing similarities between their local models and a noise-perturbed global population. This significantly reduces communication overhead. FD-EFL further enhances performance by adaptively selecting high-quality clients, effectively minimizing the impact of noisy or low-quality updates. Additionally, the framework integrates a dynamic population adjustment mechanism guided by the Critical Learning Period (CLP), dynamically expanding the population size during critical training phases to improve model accuracy, and shrinking it during non-critical phases to save communication resources. Experimental evaluations demonstrate that FD-EFL substantially lowers communication costs without compromising model accuracy, achieving comparable performance to established methods like FedAvg. Our proposed framework offers a practical and efficient solution for federated learning in heterogeneous data environments. The implementation is publicly available at: https://github.com/buaaYYC/Fed-FEL.
AB - Federated Learning (FL) enables decentralized nodes to collaboratively train models while maintaining data privacy. However, traditional FL methods often face significant challenges, including high communication overhead and slow convergence, especially when data across nodes is heterogeneous. To address these issues, this paper introduces a novel framework named Fitness-Driven Evolutionary Federated Learning (FD-EFL), which effectively combines evolutionary strategies (ES) with adaptive client selection and dynamic population adjustment. The primary innovation of FD-EFL is its fitness-driven information-sharing approach, wherein clients communicate only concise fitness metrics representing similarities between their local models and a noise-perturbed global population. This significantly reduces communication overhead. FD-EFL further enhances performance by adaptively selecting high-quality clients, effectively minimizing the impact of noisy or low-quality updates. Additionally, the framework integrates a dynamic population adjustment mechanism guided by the Critical Learning Period (CLP), dynamically expanding the population size during critical training phases to improve model accuracy, and shrinking it during non-critical phases to save communication resources. Experimental evaluations demonstrate that FD-EFL substantially lowers communication costs without compromising model accuracy, achieving comparable performance to established methods like FedAvg. Our proposed framework offers a practical and efficient solution for federated learning in heterogeneous data environments. The implementation is publicly available at: https://github.com/buaaYYC/Fed-FEL.
KW - Client Selection
KW - Communication Efficiency
KW - Evolutionary Strategies
KW - Federated Learning
KW - Heterogeneous Data
UR - https://www.scopus.com/pages/publications/105022099608
U2 - 10.1007/978-981-95-4384-7_18
DO - 10.1007/978-981-95-4384-7_18
M3 - Conference contribution
AN - SCOPUS:105022099608
SN - 9789819543830
T3 - Lecture Notes in Computer Science
SP - 250
EP - 264
BT - Neural Information Processing - 32nd International Conference, ICONIP 2025, Proceedings
A2 - Taniguchi, Tadahiro
A2 - Leung, Chi Sing Andrew
A2 - Kozuno, Tadashi
A2 - Yoshimoto, Junichiro
A2 - Mahmud, Mufti
A2 - Doborjeh, Maryam
A2 - Doya, Kenji
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 20 November 2025 through 24 November 2025
ER -