Abstract
This paper proposes a novel approach to safe consensus control of multi-agent systems (MASs) using reinforcement learning with human feedback. First, an adaptive neural network is employed to approximate the solution of the Hamilton-Jacobi-Bellman equation, tailored to nonlinear MASs exhibiting consensus behavior. Additionally, the optimized control design incorporates an actor-critic architecture, in which the actor executes control behaviors and the critic evaluates control performance. The critic provides feedback to the actor, relaxing the persistent-excitation condition. Notably, the designed mechanism, named the human-automation mechanism, leverages online interaction to improve computational efficiency and accuracy. A comprehensive theoretical analysis demonstrates guaranteed convergence of the human-feedback-based reinforcement learning and improves the interpretability of the model. In practical application scenarios, static obstacles are considered to evaluate safe-tracking performance. Experimental results demonstrate that the proposed algorithm achieves safer behavior than existing advanced algorithms.
| Original language | English |
|---|---|
| Article number | 132618 |
| Journal | Neurocomputing |
| Volume | 671 |
| DOIs | |
| State | Published - 28 Mar 2026 |
Keywords
- Human feedback
- Multi-agent systems
- Reinforcement learning
- Safe consensus control
Fingerprint
Dive into the research topics of 'Human feedback reinforcement learning for multiple agent systems online safe consensus control'. Together they form a unique fingerprint.