Abstract
Non-learning-based motion and path planning for Unmanned Aerial Vehicles (UAVs) suffers from low computational efficiency, high memory consumption for mapping, and convergence to local optima. This article investigates the challenge of quadrotor control using offline reinforcement learning. By establishing a data-driven learning paradigm that requires no interaction with the real environment, the proposed workflow offers a safer approach than traditional reinforcement learning, making it particularly suitable for UAV control in industrial scenarios. The introduced algorithm evaluates dataset uncertainty and employs a pessimistic estimate to improve offline deep reinforcement learning. Experiments highlight the algorithm's superiority over traditional online reinforcement learning methods when learning from offline datasets. Furthermore, the article emphasizes the importance of a more general behavior policy. In evaluations, the trained policy navigated diverse obstacles adeptly, underscoring its real-world applicability.
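The abstract does not spell out the paper's algorithm, but a common way to combine dataset-uncertainty evaluation with pessimistic estimation in offline RL is to penalize a value estimate by the disagreement of an ensemble of Q-estimates, so actions poorly covered by the offline dataset receive lower values. The function name and penalty coefficient `beta` below are illustrative assumptions, not the paper's notation; this is a minimal sketch of the general idea.

```python
# Hedged sketch of uncertainty-penalized pessimistic value estimation,
# assuming an ensemble of Q-estimates for a given state-action pair.
from statistics import mean, stdev

def pessimistic_q(q_ensemble: list[float], beta: float = 1.0) -> float:
    """Lower-confidence-bound estimate: ensemble mean minus beta * std.

    High ensemble disagreement (uncertainty) signals that the state-action
    pair is poorly supported by the offline dataset, so its value is
    penalized more heavily.
    """
    return mean(q_ensemble) - beta * stdev(q_ensemble)

# An in-dataset action whose Q-estimates agree keeps most of its value,
# while an out-of-distribution action with high disagreement is penalized.
in_data = pessimistic_q([10.0, 10.2, 9.8], beta=1.0)   # 10.0 - 0.2 = 9.8
ood = pessimistic_q([10.0, 16.0, 4.0], beta=1.0)       # 10.0 - 6.0 = 4.0
assert in_data > ood
```

Raising `beta` makes the policy more conservative, trading off exploitation of high but uncertain value estimates against staying close to behavior seen in the dataset.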
| Original language | English |
|---|---|
| Pages (from-to) | 386-397 |
| Number of pages | 12 |
| Journal | Chinese Journal of Aeronautics |
| Volume | 37 |
| Issue number | 11 |
| DOIs | |
| State | Published - Nov 2024 |
Keywords
- Data-driven learning
- Markov decision process
- Motion planning
- Reinforcement learning
- Unmanned aerial vehicle