Off-policy integral reinforcement learning algorithm in dealing with nonzero sum game for nonlinear distributed parameter systems

  • He Ren
  • Jing Dai*
  • Huaguang Zhang
  • Kun Zhang

  *Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Benefiting from integral reinforcement learning (IRL), this paper effectively solves the nonzero sum (NZS) game for distributed parameter systems when the system dynamics are unavailable. The Karhunen-Loève decomposition (KLD) is employed to convert the partial differential equation (PDE) system into a high-order ordinary differential equation (ODE) system. Moreover, the off-policy IRL technique is introduced to design the optimal strategies for the NZS game. To confirm that the presented algorithm converges to the optimal value functions, the traditional adaptive dynamic programming (ADP) method is first discussed, and the equivalence between the traditional ADP method and the presented off-policy method is then proved. To implement the presented off-policy IRL method, actor and critic neural networks are utilized to approximate the value functions and control strategies in the iteration process, respectively. Finally, a numerical simulation is shown to illustrate the effectiveness of the proposed off-policy algorithm.
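The PDE-to-ODE reduction step described in the abstract can be sketched numerically. The Karhunen-Loève decomposition (also known as proper orthogonal decomposition) extracts dominant spatial modes from snapshots of the PDE state; projecting the state onto those modes yields a finite-dimensional ODE state. The sketch below, which is not the paper's implementation, uses a random snapshot matrix purely as a placeholder for illustration; all names and dimensions are assumptions.

```python
import numpy as np

# Hypothetical snapshot matrix: each column is the PDE state sampled on a
# spatial grid at one time instant (random data stands in for real snapshots).
rng = np.random.default_rng(0)
n_space, n_time = 100, 50
snapshots = rng.standard_normal((n_space, n_time))

# Karhunen-Loeve decomposition via SVD of the snapshot matrix: the leading
# left singular vectors are the empirical eigenfunctions (spatial modes).
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)

# Keep enough modes to capture 99% of the "energy" (squared singular values).
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1
modes = U[:, :r]  # Galerkin basis, shape (n_space, r)

# Projecting a PDE state onto the modes gives the r-dimensional ODE state
# to which the NZS-game / off-policy IRL design is then applied.
ode_state = modes.T @ snapshots[:, 0]
print(modes.shape, ode_state.shape)
```

Galerkin projection of the PDE dynamics onto `modes` then produces the high-order ODE system on which the value-function iteration operates.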

Original language: English
Pages (from-to): 2919-2928
Number of pages: 10
Journal: Transactions of the Institute of Measurement and Control
Volume: 42
Issue number: 15
DOIs
State: Published - 1 Nov 2020
Externally published: Yes

Keywords

  • Integral reinforcement learning
  • Adaptive dynamic programming
  • Distributed parameter systems
  • Nonzero sum game
  • Off-policy algorithm
