
Communication-Assisted Multi-Agent Reinforcement Learning Improves Task-Offloading in UAV-Aided Edge-Computing Networks

  • Siyang Tan
  • Binqiang Chen
  • Dong Liu*
  • Jianglong Zhang
  • Lajos Hanzo

*Corresponding author for this work

Affiliations: 3rd Department of Technology; Beihang University; University of Southampton

Research output: Contribution to journal › Article › peer-review

Abstract

Equipping unmanned aerial vehicles (UAVs) with computing servers allows ground users to offload complex tasks to the UAVs, but the trajectory optimization of the UAVs is critical for fully exploiting their maneuverability. Existing studies either employ a centralized controller with prohibitive communication overhead, or fail to glean the benefits of interaction and coordination among agents. To circumvent this impediment, we propose to intelligently exchange critical information among agents for assisting their decision-making. We first formulate a problem for maximizing the number of offloaded tasks and the offloading fairness by optimizing the trajectories of the UAVs. We then conceive a multi-agent deep reinforcement learning (DRL) framework by harnessing communication among agents, and design a communication-assisted decentralized trajectory control algorithm based on value-decomposition networks (VDN) for fully exploiting the benefits of message exchange among agents. Simulation results demonstrate the superiority of the proposed algorithm over state-of-the-art DRL-based algorithms.
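To illustrate the core idea behind the VDN framework mentioned in the abstract, the following is a minimal sketch (not the authors' actual architecture): each agent evaluates a local Q-function over its own observation plus messages received from the other agents, and the joint action-value is the sum of the per-agent Q-values. All dimensions, the linear Q-networks, and the message sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 3   # hypothetical number of UAV agents
OBS_DIM = 4    # per-agent observation size (assumed)
MSG_DIM = 2    # size of each exchanged message (assumed)
N_ACTIONS = 5  # discrete trajectory actions, e.g. hover + 4 headings (assumed)

# One toy linear "Q-network" per agent; its input is the agent's own
# observation concatenated with the messages received from the other agents.
weights = [rng.normal(size=(OBS_DIM + MSG_DIM * (N_AGENTS - 1), N_ACTIONS))
           for _ in range(N_AGENTS)]

def agent_q(i, obs, msgs):
    """Per-agent Q-values from the local observation plus received messages."""
    x = np.concatenate([obs] + msgs)
    return x @ weights[i]

def joint_q(obs_list, msg_lists, actions):
    """VDN: the joint action-value is the SUM of the per-agent Q-values."""
    return sum(agent_q(i, obs_list[i], msg_lists[i])[actions[i]]
               for i in range(N_AGENTS))

# Decentralized execution: each agent acts greedily on its own Q-values.
# Because the joint Q is an additive decomposition, the per-agent argmax
# jointly maximizes the team's Q-value.
obs_list = [rng.normal(size=OBS_DIM) for _ in range(N_AGENTS)]
msg_lists = [[rng.normal(size=MSG_DIM) for _ in range(N_AGENTS - 1)]
             for _ in range(N_AGENTS)]
greedy = [int(np.argmax(agent_q(i, obs_list[i], msg_lists[i])))
          for i in range(N_AGENTS)]
print("greedy joint action:", greedy)
print("joint Q:", joint_q(obs_list, msg_lists, greedy))
```

The additive decomposition is what makes decentralized execution consistent with centralized training: maximizing each summand independently maximizes the sum, so no central controller is needed at run time.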

Original language: English
Pages (from-to): 2233-2237
Number of pages: 5
Journal: IEEE Wireless Communications Letters
Volume: 12
Issue number: 12
DOIs
State: Published - 1 Dec 2023

Keywords

  • Multi-agent reinforcement learning
  • trajectory planning
  • UAV
