Continuous Control with Deep Reinforcement Learning for Mobile Robot Navigation

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Autonomous navigation is a central focus of mobile robot research. Traditional methods typically consist of three parts: building a map of the environment, localizing the mobile robot, and planning a path. However, these methods usually rely on high-precision sensor information, and the mobile robot gains no intelligent understanding of the navigation task itself. In this article, a deep reinforcement learning method, soft actor-critic, is used to navigate in a mapless environment. The policy takes laser-scan data and target information as input and outputs linear and angular velocities in a continuous action space. Simulations show that this learning-based, end-to-end autonomous navigation method can accomplish the task as well as traditional methods.
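The abstract describes a policy that maps laser scans plus target information to continuous linear and angular velocities. The sketch below illustrates that input/output interface with a minimal Gaussian-plus-tanh actor head of the kind typically used in soft actor-critic; the network sizes, laser downsampling, goal encoding, and velocity limits are all assumptions for illustration, not details taken from the paper.

```python
import numpy as np

# Assumed dimensions and limits (not specified in the abstract):
LASER_DIM = 10            # downsampled laser readings
GOAL_DIM = 2              # distance and heading to the target
V_MAX, W_MAX = 0.5, 1.0   # linear (m/s) and angular (rad/s) limits

rng = np.random.default_rng(0)

def init_policy(state_dim, hidden=64, action_dim=2):
    """One hidden layer plus a Gaussian head, as in a typical SAC actor."""
    return {
        "w1": rng.normal(0, 0.1, (state_dim, hidden)),
        "b1": np.zeros(hidden),
        "w_mu": rng.normal(0, 0.1, (hidden, action_dim)),
        "b_mu": np.zeros(action_dim),
        "log_std": np.full(action_dim, -1.0),
    }

def act(params, state, deterministic=False):
    """Map a state vector to (linear velocity, angular velocity)."""
    h = np.tanh(state @ params["w1"] + params["b1"])
    mu = h @ params["w_mu"] + params["b_mu"]
    if not deterministic:  # stochastic sampling during training
        mu = mu + np.exp(params["log_std"]) * rng.normal(size=mu.shape)
    squashed = np.tanh(mu)                  # bound raw actions to (-1, 1)
    v = (squashed[0] + 1.0) / 2.0 * V_MAX   # forward velocity in [0, V_MAX]
    w = squashed[1] * W_MAX                 # angular velocity in [-W_MAX, W_MAX]
    return v, w

# State = laser readings concatenated with goal information.
state = np.concatenate([np.ones(LASER_DIM), [1.5, 0.3]])
policy = init_policy(LASER_DIM + GOAL_DIM)
v, w = act(policy, state, deterministic=True)
```

The tanh squashing is the standard SAC trick for keeping sampled actions inside the actuator limits; here it additionally maps the first action to a non-negative forward velocity.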

Original language: English
Title of host publication: Proceedings - 2019 Chinese Automation Congress, CAC 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1501-1506
Number of pages: 6
ISBN (Electronic): 9781728140940
DOIs
State: Published - Nov 2019
Event: 2019 Chinese Automation Congress, CAC 2019 - Hangzhou, China
Duration: 22 Nov 2019 - 24 Nov 2019

Publication series

Name: Proceedings - 2019 Chinese Automation Congress, CAC 2019

Conference

Conference: 2019 Chinese Automation Congress, CAC 2019
Country/Territory: China
City: Hangzhou
Period: 22/11/19 - 24/11/19

Keywords

  • Autonomous navigation
  • Deep Reinforcement Learning
  • Mobile Robot
  • Soft Actor Critic

