Abstract
This paper proposes a reinforcement learning (RL)-based path-following strategy for underactuated airships subject to magnitude and rate saturation. A Markov decision process (MDP) model of the control problem is first established. An error-bounded line-of-sight (LOS) guidance law is then investigated to constrain the state space. Subsequently, a proximal policy optimization (PPO) algorithm is employed to approximate the optimal action policy through trial and error. Because the optimal action policy is generated directly from the bounded action space, magnitude and rate saturation are avoided. Simulation results on circular, general, broken-line, and anti-wind path-following tasks demonstrate that the proposed control scheme transfers to new tasks without adaptation and achieves satisfactory real-time performance and robustness.
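To make the guidance step concrete, below is a minimal sketch of a conventional straight-line LOS guidance law of the kind the abstract builds on; the paper's error-bounded variant differs in detail, and the function name, waypoint interface, and lookahead value here are illustrative assumptions, not the authors' implementation.

```python
import math

def los_guidance(px, py, wx1, wy1, wx2, wy2, lookahead=50.0):
    """Conventional LOS guidance toward the path segment w1 -> w2.

    Returns the desired course angle (rad) for the vehicle at (px, py).
    The lookahead distance trades convergence speed against overshoot;
    50 m is an arbitrary placeholder, not a value from the paper.
    """
    # Tangent angle of the current path segment.
    chi_path = math.atan2(wy2 - wy1, wx2 - wx1)
    # Cross-track error: signed lateral distance from the vehicle to the path.
    e = -(px - wx1) * math.sin(chi_path) + (py - wy1) * math.cos(chi_path)
    # Steer toward a virtual point `lookahead` metres ahead on the path,
    # which drives the cross-track error to zero.
    return chi_path + math.atan2(-e, lookahead)
```

For example, a vehicle sitting exactly on an x-axis path gets the path's own course back, while a vehicle displaced to one side gets a course correction toward the path.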
| Field | Value |
|---|---|
| Original language | English |
| Article number | 7176 |
| Pages (from-to) | 1-18 |
| Number of pages | 18 |
| Journal | Sensors |
| Volume | 20 |
| Issue number | 24 |
| DOIs | |
| State | Published - 2 Dec 2020 |
Keywords
- Magnitude and rate saturation
- Path following
- Reinforcement learning
- Underactuated airships
Fingerprint
Dive into the research topics of 'Path following control for underactuated airships with magnitude and rate saturation'.