Abstract
Coordination optimization plays a vital role in complex multi-agent systems, and Multi-Agent Reinforcement Learning (MARL) has emerged as a widely adopted solution. However, MARL still faces significant challenges in this domain, including low coordination efficiency and inaccurate value estimation. To address these issues, we propose MVAPO, a novel Multi-head joint Value Attention-based Policy Optimization algorithm that improves policy learning through enhanced value approximation and selective attention to agent contributions. The key innovation of MVAPO is a joint value network augmented with a multi-head attention mechanism in which context-aware team rewards serve as queries, directing attention to the agents most relevant in each situation. The model can thus dynamically focus on whichever agents are most critical at any given time, improving both coordination efficiency and the accuracy of value estimates. MVAPO further incorporates feedforward and residual layers, removing linear and monotonic constraints and thereby significantly enhancing its representational capacity. Extensive experiments on a multi-UAV benchmark across a variety of scenarios demonstrate that MVAPO consistently outperforms state-of-the-art methods in both reward acquisition and win rate, highlighting its superior performance and robustness.
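As a concrete illustration of the mechanism the abstract describes, below is a minimal PyTorch sketch of a joint value network in which an embedding of the context-aware team reward serves as the attention query over per-agent features, followed by feedforward and residual layers. All module names, dimensions, and the exact way the reward context forms the query are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a reward-conditioned joint value network, assuming
# a PyTorch implementation. Names and dimensions are hypothetical.
import torch
import torch.nn as nn


class JointValueAttention(nn.Module):
    """Estimates a joint value by attending over per-agent features.

    The context-aware team reward is embedded and used as the attention
    query, so the heads can focus on whichever agents are most relevant
    in the current situation.
    """

    def __init__(self, agent_dim: int, ctx_dim: int,
                 embed_dim: int = 64, n_heads: int = 4):
        super().__init__()
        self.agent_embed = nn.Linear(agent_dim, embed_dim)  # keys/values
        self.ctx_embed = nn.Linear(ctx_dim, embed_dim)      # query
        self.attn = nn.MultiheadAttention(embed_dim, n_heads,
                                          batch_first=True)
        # Feedforward + residual block; no linearity or monotonicity
        # constraint is imposed on how agent contributions are mixed.
        self.ff = nn.Sequential(
            nn.Linear(embed_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )
        self.norm1 = nn.LayerNorm(embed_dim)
        self.norm2 = nn.LayerNorm(embed_dim)
        self.value_head = nn.Linear(embed_dim, 1)

    def forward(self, agent_feats, reward_ctx):
        # agent_feats: (batch, n_agents, agent_dim) per-agent features
        # reward_ctx:  (batch, ctx_dim) team-reward context
        kv = self.agent_embed(agent_feats)            # (B, N, E)
        q = self.ctx_embed(reward_ctx).unsqueeze(1)   # (B, 1, E)
        attended, weights = self.attn(q, kv, kv)      # attend over agents
        x = self.norm1(q + attended)                  # residual 1
        x = self.norm2(x + self.ff(x))                # residual 2
        return self.value_head(x).squeeze(-1), weights


# Usage: a batch of 8 samples with 5 UAV agents.
net = JointValueAttention(agent_dim=32, ctx_dim=16)
v, attn_w = net(torch.randn(8, 5, 32), torch.randn(8, 16))
print(v.shape, attn_w.shape)  # torch.Size([8, 1]) torch.Size([8, 1, 5])
```

Using the reward context as the query, rather than plain self-attention among agents, is what lets the attention weights reweight agents per situation; the feedforward/residual block after the attention output is what avoids the monotonic mixing constraint mentioned in the abstract.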
| Field | Value |
|---|---|
| Original language | English |
| Article number | 102132 |
| Journal | Swarm and Evolutionary Computation |
| Volume | 99 |
| DOIs | |
| State | Published - Dec 2025 |
Keywords
- Multi-agent coordination
- Multi-agent deep reinforcement learning
- Multi-head value attention mechanisms
- Policy optimization
- Value decomposition