
Sequential Model Averaging-Based Decentralized Learning for Low Overhead and Fast Convergence

Research output: Contribution to journal › Article › peer-review

Abstract

Decentralized learning can leverage computing resources at the wireless edge, but incurs high signaling overhead and slow convergence when using existing methods for model averaging. In this letter, we propose a sequential model averaging method aimed at reducing overhead and accelerating convergence. Specifically, instead of exchanging models with all neighboring nodes, each node transmits and computes sequentially along a spanning tree for model averaging. As a case study, we train a graph neural network to optimize power control. Simulation results demonstrate that decentralized learning using the proposed method needs only 0.5%–19.9% of the signaling overhead and 0.4%–14.9% of the communication rounds of several state-of-the-art counterparts to converge.
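The core idea described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: nodes are visited in a spanning-tree traversal order, each node folds its local model into a running average and forwards only that single vector to the next node, so one model is transmitted per hop rather than a full neighbor-wise exchange per round. All names (`sequential_average`, `models`, `order`) are hypothetical.

```python
def sequential_average(models, order):
    """Sequential model averaging along a spanning-tree traversal (sketch).

    models: dict mapping node id -> local model parameters (list of floats)
    order:  node visit order along a spanning-tree traversal

    Each visited node updates the running average incrementally and passes
    it on; the last node holds the global average, which can then be sent
    back along the tree to all nodes.
    """
    avg = None
    for k, node in enumerate(order, start=1):
        local = models[node]
        if avg is None:
            avg = list(local)  # first node starts the running average
        else:
            # incremental mean update: avg += (local - avg) / k
            avg = [a + (x - a) / k for a, x in zip(avg, local)]
    return avg
```

For example, with three nodes holding models `[1.0, 2.0]`, `[3.0, 4.0]`, and `[5.0, 6.0]`, a traversal in order `[0, 1, 2]` yields the exact global average `[3.0, 4.0]` after only two transmissions, one per tree edge.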

Original language: English
Pages (from-to): 71-75
Number of pages: 5
Journal: IEEE Communications Letters
Volume: 30
State: Published - 2026

Keywords

  • Decentralized learning
  • convergence
  • model averaging
  • signaling overhead
