
Graph Reinforcement Learning for Multi-Aircraft Conflict Resolution

  • Yumeng Li, Yunhe Zhang, Tong Guo, Yu Liu, Yisheng Lv, Wenbo Du*
  • *Corresponding author for this work
  • Beihang University
  • Naval Aviation University
  • Tsinghua University
  • CAS - Institute of Automation

Research output: Contribution to journal › Article › peer-review

Abstract

The escalating density of airspace has sharply increased conflicts between aircraft, making efficient and scalable conflict resolution methods crucial to mitigate collision risks. Existing learning-based methods become less effective as the number of aircraft grows, owing to their redundant information representations. In this paper, to accommodate the increased airspace density, a novel graph reinforcement learning (GRL) method is presented to efficiently learn deconfliction strategies. A time-evolving conflict graph is exploited to represent both the local state of individual aircraft and the global spatiotemporal relationships between them. Equipped with this conflict graph, GRL efficiently learns deconfliction strategies by selectively aggregating aircraft state information through a multi-head attention-boosted graph neural network. Furthermore, a temporal regularization mechanism is proposed to enhance learning stability in highly dynamic environments. Comprehensive experimental studies were conducted on an OpenAI Gym-based flight simulator. Compared with existing state-of-the-art learning-based methods, the results demonstrate that GRL substantially reduces training time while achieving significantly better deconfliction strategies in terms of safety and efficiency metrics. In addition, GRL exhibits strong scalability and robustness as the aircraft scale increases.
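The abstract describes selectively aggregating aircraft state information over a conflict graph via multi-head attention. As a rough illustration only (this is not the paper's implementation; the state dimensions, projection matrices `Wq`/`Wk`/`Wv`, and the adjacency-masking scheme are all assumptions for the sketch), masked multi-head attention over a conflict-graph adjacency matrix could look like:

```python
import numpy as np

def multi_head_attention_aggregate(states, adjacency, Wq, Wk, Wv, num_heads):
    """Aggregate neighbouring aircraft states with masked multi-head attention.

    states:     (N, d) per-aircraft state vectors (hypothetical features)
    adjacency:  (N, N) 0/1 conflict-graph adjacency (1 = potential conflict)
    Wq/Wk/Wv:   (num_heads, d, d_head) projection tensors (assumed shapes)
    Returns an (N, num_heads * d_head) aggregated representation.
    """
    n, _ = states.shape
    d_head = Wq.shape[2]
    outputs = []
    for h in range(num_heads):
        q = states @ Wq[h]                       # (N, d_head) queries
        k = states @ Wk[h]                       # (N, d_head) keys
        v = states @ Wv[h]                       # (N, d_head) values
        scores = q @ k.T / np.sqrt(d_head)       # (N, N) scaled dot-product
        # Mask out pairs with no conflict edge; keep a self-loop so every
        # aircraft attends at least to its own state.
        mask = adjacency + np.eye(n)
        scores = np.where(mask > 0, scores, -1e9)
        # Row-wise softmax over the unmasked neighbours.
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)
        outputs.append(weights @ v)              # (N, d_head) per head
    return np.concatenate(outputs, axis=1)

rng = np.random.default_rng(0)
n, d, heads, d_head = 4, 6, 2, 3
states = rng.normal(size=(n, d))
# Toy conflict graph: aircraft 0-1 and 1-2 in conflict, aircraft 3 isolated.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 0],
                [0, 0, 0, 0]], dtype=float)
Wq = rng.normal(size=(heads, d, d_head))
Wk = rng.normal(size=(heads, d, d_head))
Wv = rng.normal(size=(heads, d, d_head))
out = multi_head_attention_aggregate(states, adj, Wq, Wk, Wv, heads)
print(out.shape)  # (4, 6)
```

Because the isolated aircraft (index 3) has no conflict edges, the mask leaves only its self-loop, so its aggregated vector reduces to its own value projections, which is the intended "selective aggregation" behaviour.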

Original language: English
Pages (from-to): 4529-4540
Number of pages: 12
Journal: IEEE Transactions on Intelligent Vehicles
Volume: 9
Issue: 3
DOI
Publication status: Published - 1 Mar 2024
