Abstract
Decentralized learning can leverage computing resources at the wireless edge, but it incurs high signaling overhead and converges slowly when existing methods are used for model averaging. In this letter, we propose a sequential model averaging method to reduce overhead and accelerate convergence. Specifically, instead of exchanging models with neighboring nodes, each node transmits and computes sequentially along a spanning tree for model averaging. As a case study, we train a graph neural network to optimize power control. Simulation results demonstrate that decentralized learning with the proposed method needs only 0.5%–19.9% of the signaling overhead and 0.4%–14.9% of the communication rounds required by several state-of-the-art counterparts to converge.
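To make the idea concrete, below is a minimal sketch of averaging model parameters sequentially along a spanning tree, rather than via neighbor-to-neighbor exchange. The traversal order, the `local_models` dictionary, and the function name are illustrative assumptions for this sketch, not the paper's actual protocol.

```python
import numpy as np

def sequential_average(tree_order, local_models):
    """Accumulate a running average of model parameters while visiting
    nodes in a fixed traversal order of the spanning tree.

    tree_order:   list of node ids giving the traversal order (assumed)
    local_models: dict mapping node id -> flat parameter vector
    """
    avg = None
    for k, node in enumerate(tree_order, start=1):
        theta = local_models[node]
        if avg is None:
            avg = theta.copy()        # first node seeds the average
        else:
            avg += (theta - avg) / k  # incremental (running) mean update
    # The last node in the traversal holds the global average; it would
    # then be propagated back along the tree so every node can adopt it.
    return avg

# Usage: 4 nodes on a chain, which is a spanning tree of the network graph
rng = np.random.default_rng(0)
models = {i: rng.standard_normal(8) for i in range(4)}
avg = sequential_average([0, 1, 2, 3], models)
assert np.allclose(avg, np.mean(list(models.values()), axis=0))
```

Each node communicates with only one successor per round under this scheme, which is the intuition behind the overhead reduction the abstract reports relative to neighbor-exchange methods.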
| Original language | English |
|---|---|
| Pages (from-to) | 71-75 |
| Number of pages | 5 |
| Journal | IEEE Communications Letters |
| Volume | 30 |
| State | Published - 2026 |
Keywords
- Decentralized learning
- convergence
- model averaging
- signaling overhead