Robust Incremental Learning of Approximate Dynamic Programming for Nonlinear Optimal Guidance

Research output: Contribution to journal › Article › peer-review

Abstract

Existing nonlinear guidance methods struggle to reconcile performance optimality with stability assurance. This study proposes a concept of robust incremental learning for approximate optimal control of nonlinear terminal guidance problems. It transitions incrementally and stably from a traditional, analytically formulated guidance law to an approximate optimal guidance policy. Specifically, we propose an incremental policy iteration algorithm in which a base guidance law is utilized to mitigate initial instability and warm-start the learning process. A robustness enhancement technique is then proposed to theoretically guarantee the stability of the learning process, in which the guidance command is refined by leveraging a virtual Lyapunov-based energy function. As a result, a robust and efficient learning method for nonlinear optimal guidance problems is developed. Simulation results for a specific impact-angle-constrained guidance problem verify the advantages of the proposed method in efficiency, stability, and optimality.
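The warm-started, stability-safeguarded learning scheme described above can be illustrated with a minimal sketch. This is not the paper's algorithm: it applies policy iteration to a hypothetical scalar linear-quadratic problem, warm-starts from a stabilizing base gain (standing in for the analytical base guidance law), and uses a contractivity check as a stand-in for the Lyapunov-based command refinement. All parameter values are illustrative assumptions.

```python
# Illustrative sketch (not the published method): policy iteration for the
# scalar system x_{k+1} = a*x + b*u with stage cost q*x^2 + r*u^2,
# warm-started by a known stabilizing gain and guarded by a
# Lyapunov-style stability check on each candidate update.

def policy_iteration(a=1.1, b=1.0, q=1.0, r=1.0, k_base=0.5, iters=50):
    """Return (gain, value coefficient) approximating the optimal policy u = -k*x."""
    k = k_base  # warm start from the analytically chosen base gain
    for _ in range(iters):
        # Policy evaluation: solve P = q + r*k^2 + (a - b*k)^2 * P in closed form.
        cl = a - b * k
        assert abs(cl) < 1.0, "current policy must be stabilizing"
        p = (q + r * k * k) / (1.0 - cl * cl)
        # Policy improvement from the evaluated value function.
        k_new = a * b * p / (r + b * b * p)
        # Stability safeguard: accept the update only if the closed loop stays
        # contractive; otherwise blend back toward the known-stable gain.
        while abs(a - b * k_new) >= 1.0:
            k_new = 0.5 * (k_new + k)
        k = k_new
    return k, p
```

For these illustrative parameters the iteration converges to the gain solving the associated discrete algebraic Riccati equation, while the safeguard ensures every intermediate policy remains stabilizing, mirroring the incremental, stability-preserving transition described in the abstract.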

Original language: English
Pages (from-to): 6041-6052
Number of pages: 12
Journal: IEEE Transactions on Aerospace and Electronic Systems
Volume: 61
Issue number: 3
DOIs
State: Published - 2025

Keywords

  • Approximate dynamic programming (ADP)
  • impact angle constraint
  • optimal control
  • optimal guidance
  • terminal guidance
