Model-free attitude synchronization for multiple heterogeneous quadrotors via reinforcement learning

  • Wanbing Zhao
  • Hao Liu*
  • Bohui Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, a model-free optimal synchronization controller is designed to achieve aggressive attitude synchronization for multiple heterogeneous quadrotor systems with highly nonlinear and coupled dynamics, using a reinforcement learning (RL) approach. A distributed observer is first designed for each following quadrotor to estimate the states of a virtual leader. A performance function is then defined for each quadrotor to penalize the observed synchronization error and the control effort. Finally, an RL approach is employed to learn the optimal control law without any knowledge of the followers' dynamic models. The control law depends on the quadrotor states and the observer states, and guarantees that the attitude synchronization error converges to zero for all quadrotors, even under aggressive maneuvers. Simulation results are provided to verify the effectiveness of the proposed controller.

Original language: English
Pages (from-to): 2528-2547
Number of pages: 20
Journal: International Journal of Intelligent Systems
Volume: 36
Issue number: 6
DOIs
State: Published - Jun 2021

Keywords

  • attitude synchronization
  • heterogeneous system
  • multiagent system
  • quadrotors
  • reinforcement learning
