TY - JOUR
T1 - FedPDM
T2 - Representation enhanced federated learning with privacy preserving diffusion models
AU - Guo, Wei
AU - Zhuang, Fuzhen
AU - Tong, Yiqi
AU - Zhang, Xiao
AU - Hu, Zhaojun
AU - Zhao, Jiejie
AU - Dong, Jin
N1 - Publisher Copyright:
© 2026 Elsevier B.V.
PY - 2026/4/8
Y1 - 2026/4/8
N2 - Most existing semi-parameter-sharing federated learning (FL) frameworks utilize generative models to achieve partial parameter sharing with the server, which effectively enhances the data privacy of each client. However, these generative models often suffer from model utility degradation due to poor representation robustness. Meanwhile, representation inconsistency between local and global models exacerbates the client drift problem under non-IID scenarios. Furthermore, existing semi-parameter-sharing FL frameworks overlook representation leakage risks associated with generator sharing, while failing to balance privacy and utility. To alleviate these challenges, we propose FedPDM, a semi-parameter-sharing FL framework built upon a privacy-preserving diffusion model (PDM). Specifically, our proposed PDM enables model alignment with features from the privacy extractor without requiring direct exposure of this extractor, effectively mitigating utility degradation caused by poor representation robustness. Moreover, a feature-level penalty term is introduced into the optimization objective of PDM to avoid representation leakage. We further design a two-stage aggregation strategy that addresses representation inconsistency through initialization correction with a Gaussian constraint for knowledge distillation. Finally, we provide the first theoretical convergence analysis for semi-parameter-sharing FL, demonstrating that our framework converges at a rate of O(1/T). Extensive experiments on four datasets show that FedPDM achieves average accuracy improvements of 1.78% to 5.56% compared with various state-of-the-art baselines.
AB - Most existing semi-parameter-sharing federated learning (FL) frameworks utilize generative models to achieve partial parameter sharing with the server, which effectively enhances the data privacy of each client. However, these generative models often suffer from model utility degradation due to poor representation robustness. Meanwhile, representation inconsistency between local and global models exacerbates the client drift problem under non-IID scenarios. Furthermore, existing semi-parameter-sharing FL frameworks overlook representation leakage risks associated with generator sharing, while failing to balance privacy and utility. To alleviate these challenges, we propose FedPDM, a semi-parameter-sharing FL framework built upon a privacy-preserving diffusion model (PDM). Specifically, our proposed PDM enables model alignment with features from the privacy extractor without requiring direct exposure of this extractor, effectively mitigating utility degradation caused by poor representation robustness. Moreover, a feature-level penalty term is introduced into the optimization objective of PDM to avoid representation leakage. We further design a two-stage aggregation strategy that addresses representation inconsistency through initialization correction with a Gaussian constraint for knowledge distillation. Finally, we provide the first theoretical convergence analysis for semi-parameter-sharing FL, demonstrating that our framework converges at a rate of O(1/T). Extensive experiments on four datasets show that FedPDM achieves average accuracy improvements of 1.78% to 5.56% compared with various state-of-the-art baselines.
KW - Diffusion model
KW - Federated learning
KW - Privacy protection
KW - Split learning
UR - https://www.scopus.com/pages/publications/105029647565
U2 - 10.1016/j.knosys.2026.115452
DO - 10.1016/j.knosys.2026.115452
M3 - Article
AN - SCOPUS:105029647565
SN - 0950-7051
VL - 338
JO - Knowledge-Based Systems
JF - Knowledge-Based Systems
M1 - 115452
ER -