TY - GEN
T1 - Enabling Communication-efficient and Robust Federated Learning over Packet Lossy Networks via Random Interleaved Vector Quantization
AU - Guan, Yixuan
AU - Niu, Jianwei
AU - Ren, Tao
AU - Liu, Xuefeng
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - In packet erasure networks, federated learning (FL) typically suffers prohibitive communication overhead from massive retransmissions of high-dimensional gradients. As a result, recent studies are dedicated to developing retransmission-free gradient compression techniques with erasure resilience. Nonetheless, two limitations remain unsolved: existing works neither explore why packet erasure degrades FL performance nor exploit the spatial correlations among gradient entries for better compression. In this paper, we investigate FL performance degradation by analyzing model updating deviation and find that the deviation is exacerbated by dependencies among lost gradient entries. Building on this observation, we propose FedRIVQ, a communication-efficient and robust FL framework built on a customized compressor termed random interleaved vector quantization (VQ). FedRIVQ leverages the spatial correlations among gradient entries with VQ and randomly interleaves these entries prior to VQ to eliminate their dependencies. These innovations allow all gradient entries to share an identical erasure probability, so that packet erasure becomes equivalent to random erasure, which significantly improves both the communication efficiency and robustness of FL. Theoretical analysis and experimental results consistently demonstrate the effectiveness of our designs.
AB - In packet erasure networks, federated learning (FL) typically suffers prohibitive communication overhead from massive retransmissions of high-dimensional gradients. As a result, recent studies are dedicated to developing retransmission-free gradient compression techniques with erasure resilience. Nonetheless, two limitations remain unsolved: existing works neither explore why packet erasure degrades FL performance nor exploit the spatial correlations among gradient entries for better compression. In this paper, we investigate FL performance degradation by analyzing model updating deviation and find that the deviation is exacerbated by dependencies among lost gradient entries. Building on this observation, we propose FedRIVQ, a communication-efficient and robust FL framework built on a customized compressor termed random interleaved vector quantization (VQ). FedRIVQ leverages the spatial correlations among gradient entries with VQ and randomly interleaves these entries prior to VQ to eliminate their dependencies. These innovations allow all gradient entries to share an identical erasure probability, so that packet erasure becomes equivalent to random erasure, which significantly improves both the communication efficiency and robustness of FL. Theoretical analysis and experimental results consistently demonstrate the effectiveness of our designs.
KW - Communication Overhead
KW - Federated Learning
KW - Packet Erasure
KW - Random Interleaving
KW - Vector Quantization
UR - https://www.scopus.com/pages/publications/105022627476
U2 - 10.1109/ICME59968.2025.11209237
DO - 10.1109/ICME59968.2025.11209237
M3 - Conference contribution
AN - SCOPUS:105022627476
T3 - Proceedings - IEEE International Conference on Multimedia and Expo
BT - 2025 IEEE International Conference on Multimedia and Expo
PB - IEEE Computer Society
T2 - 2025 IEEE International Conference on Multimedia and Expo, ICME 2025
Y2 - 30 June 2025 through 4 July 2025
ER -