TY - JOUR
T1 - Real-Time Navigation of Unmanned Ground Vehicles in Complex Terrains With Enhanced Perception and Memory-Guided Strategies
AU - Han, Zhixuan
AU - Chen, Peng
AU - Zhou, Bin
AU - Yu, Guizhen
N1 - Publisher Copyright:
© 1967-2012 IEEE.
PY - 2025
Y1 - 2025
N2 - Accurate navigation of unmanned ground vehicles (UGVs) across challenging outdoor terrains demands precise maneuvering over diverse surface characteristics while ensuring collision avoidance. Conventional navigation methods often struggle in dynamic environments because of their reliance on pre-mapped data and their limited ability to assimilate multi-sensory data in real time. To address these limitations, this study proposes a methodology that integrates diverse sensory inputs, including pose data, images, and point clouds, to enhance UGVs' situational awareness and decision-making capabilities. Central to this work is a multi-modal reinforcement learning framework that incorporates a lattice-based motion planning algorithm, calibrated to optimize action selection while respecting UGVs' kinematic constraints. Additionally, a novel dual-training paradigm is introduced, combining curriculum learning and modal separation techniques to address the complexities of multi-modal learning. A notable contribution is the integration of Long Short-Term Memory (LSTM) networks, which mitigate information decay and preserve essential navigational strategies over extended periods. The fusion of advanced perception and memory-guided strategies establishes a new standard for autonomous UGV navigation across diverse and unpredictable terrains.
AB - Accurate navigation of unmanned ground vehicles (UGVs) across challenging outdoor terrains demands precise maneuvering over diverse surface characteristics while ensuring collision avoidance. Conventional navigation methods often struggle in dynamic environments because of their reliance on pre-mapped data and their limited ability to assimilate multi-sensory data in real time. To address these limitations, this study proposes a methodology that integrates diverse sensory inputs, including pose data, images, and point clouds, to enhance UGVs' situational awareness and decision-making capabilities. Central to this work is a multi-modal reinforcement learning framework that incorporates a lattice-based motion planning algorithm, calibrated to optimize action selection while respecting UGVs' kinematic constraints. Additionally, a novel dual-training paradigm is introduced, combining curriculum learning and modal separation techniques to address the complexities of multi-modal learning. A notable contribution is the integration of Long Short-Term Memory (LSTM) networks, which mitigate information decay and preserve essential navigational strategies over extended periods. The fusion of advanced perception and memory-guided strategies establishes a new standard for autonomous UGV navigation across diverse and unpredictable terrains.
KW - Unmanned ground vehicle
KW - complex terrain
KW - navigation
KW - reinforcement learning
KW - unknown environment
UR - https://www.scopus.com/pages/publications/105001075103
U2 - 10.1109/TVT.2024.3500002
DO - 10.1109/TVT.2024.3500002
M3 - Article
AN - SCOPUS:105001075103
SN - 0018-9545
VL - 74
SP - 3723
EP - 3735
JO - IEEE Transactions on Vehicular Technology
JF - IEEE Transactions on Vehicular Technology
IS - 3
ER -