Heuristic Q-learning based on experience replay for three-dimensional path planning of the unmanned aerial vehicle

Research output: Contribution to journal › Article › peer-review

Abstract

To address the convergence difficulties that existing reinforcement learning algorithms face in the large state space of three-dimensional path planning for unmanned aerial vehicles, this article proposes a reinforcement learning algorithm that combines a heuristic function with an experience replay mechanism based on the maximum average reward. Knowledge of track performance is used to construct a heuristic function that guides the unmanned aerial vehicle's action selection and reduces useless exploration. The experience replay mechanism based on the maximum average reward increases the utilization of high-quality samples and accelerates the convergence of the algorithm. Simulation results show that the proposed three-dimensional path planning algorithm learns efficiently, with significantly improved convergence speed and training performance.
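The two ideas in the abstract — a heuristic term added to the greedy action choice, and a replay buffer that preferentially replays the episodes with the highest average reward — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name, the distance-based toy heuristic, and all parameter values (`alpha`, `gamma`, `epsilon`, the heuristic weight `xi`) are assumptions for demonstration.

```python
import random
from collections import defaultdict

class HeuristicQReplay:
    """Illustrative sketch: tabular Q-learning with a heuristic bias on
    action selection and replay of high-average-reward episodes.
    Names and hyperparameters are hypothetical, not from the paper."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1, xi=1.0):
        self.q = defaultdict(float)          # Q[(state, action)]
        self.actions = actions               # e.g. 3D grid moves
        self.alpha, self.gamma = alpha, gamma
        self.epsilon, self.xi = epsilon, xi  # xi weights the heuristic term
        self.buffer = []                     # list of (avg_reward, transitions)

    def heuristic(self, state, action, goal):
        """Toy heuristic standing in for track-performance knowledge:
        reward actions that reduce Manhattan distance to the goal."""
        nxt = tuple(s + a for s, a in zip(state, action))
        d_now = sum(abs(g - s) for g, s in zip(goal, state))
        d_nxt = sum(abs(g - s) for g, s in zip(goal, nxt))
        return 1.0 if d_nxt < d_now else 0.0

    def select_action(self, state, goal):
        """Epsilon-greedy choice over Q(s, a) + xi * H(s, a)."""
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions,
                   key=lambda a: self.q[(state, a)]
                   + self.xi * self.heuristic(state, a, goal))

    def update(self, s, a, r, s2):
        """Standard one-step Q-learning update."""
        best_next = max(self.q[(s2, a2)] for a2 in self.actions)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next
                                        - self.q[(s, a)])

    def store_episode(self, transitions):
        """Tag each stored episode with its average per-step reward."""
        avg = sum(r for _, _, r, _ in transitions) / len(transitions)
        self.buffer.append((avg, transitions))

    def replay_best(self, k=1):
        """Replay the k episodes with the highest average reward,
        re-applying their updates to raise the utilization of good samples."""
        for _, transitions in sorted(self.buffer, key=lambda e: -e[0])[:k]:
            for s, a, r, s2 in transitions:
                self.update(s, a, r, s2)
```

With all Q-values at zero, the heuristic term alone steers the greedy choice toward the goal-directed move, which is the mechanism the abstract describes for cutting down useless exploration early in training.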

Original language: English
Journal: Science Progress
Volume: 103
Issue number: 1
DOIs
State: Published - Jan 2020

Keywords

  • Path planning
  • Q-learning
  • experience replay
  • heuristic information
  • unmanned aerial vehicle

