Abstract
In this paper, a model-free optimal synchronization controller based on reinforcement learning (RL) is designed to achieve aggressive attitude synchronization for multiple heterogeneous quadrotor systems with highly nonlinear and coupled dynamics. A distributed observer is first designed for each follower quadrotor to estimate the states of a virtual leader. A performance function is then defined for each quadrotor to penalize both the observed synchronization error and the control effort. Finally, an RL approach is employed to learn the optimal control law without any knowledge of the followers' dynamic models. The resulting control law depends only on the quadrotor states and the observer states, and guarantees that the attitude synchronization error converges to zero for all quadrotors, even under aggressive maneuvers. Simulation results verify the effectiveness of the proposed controller.
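The first stage described above, a distributed observer that lets every follower estimate the virtual leader's states through the communication graph, can be illustrated with a minimal numerical sketch. The construction below is a standard distributed observer (consensus coupling plus pinning to the leader), not necessarily the exact design in the paper; the leader exosystem `S`, the adjacency matrix `A`, the pinning gains `g`, and the coupling gain `mu` are all assumed for illustration. Once each observer state converges to the leader state, the quadratic performance index penalizing the observed synchronization error and the control effort, as in the second stage, is well defined.

```python
import numpy as np

# Assumed leader exosystem: x0_dot = S @ x0 (a harmonic reference signal).
S = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# Assumed follower communication graph (path 0-1-2-3); only follower 0
# measures the leader directly (pinning gain g[0] = 1).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
g = np.array([1.0, 0.0, 0.0, 0.0])
mu, dt, steps = 5.0, 1e-3, 20000   # coupling gain and Euler integration setup

x0 = np.array([1.0, 0.0])          # leader state
eta = np.zeros((4, 2))             # one observer state per follower

for _ in range(steps):
    # sum_j a_ij * (eta_j - eta_i): consensus coupling among followers
    consensus = A @ eta - A.sum(axis=1, keepdims=True) * eta
    # g_i * (x0 - eta_i): pinning term for followers that see the leader
    pinning = g[:, None] * (x0 - eta)
    # eta_i_dot = S @ eta_i + mu * (consensus + pinning)
    eta = eta + dt * (eta @ S.T + mu * (consensus + pinning))
    x0 = x0 + dt * (S @ x0)

# Each follower's observation error eta_i - x0 should be near zero, so the
# quadratic cost e'Qe + u'Ru over the observed error is well posed.
err = np.linalg.norm(eta - x0, axis=1)
print(err.max())
```

With a connected graph and at least one pinned follower, the observer error dynamics are exponentially stable, so every follower tracks the leader even though three of the four never measure it directly.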
| Original language | English |
|---|---|
| Pages (from-to) | 2528-2547 |
| Number of pages | 20 |
| Journal | International Journal of Intelligent Systems |
| Volume | 36 |
| Issue number | 6 |
| DOIs | |
| State | Published - Jun 2021 |
Keywords
- attitude synchronization
- heterogeneous system
- multiagent system
- quadrotors
- reinforcement learning
Title: Model-free attitude synchronization for multiple heterogeneous quadrotors via reinforcement learning