
Reinforced GNNs for Multiple Instance Learning

  • Xusheng Zhao
  • , Qiong Dai*
  • , Xu Bai
  • , Jia Wu
  • , Hao Peng*
  • , Huailiang Peng
  • , Zhengtao Yu
  • , Philip S. Yu
  • *Corresponding author for this work
  • CAS - Institute of Information Engineering
  • University of Chinese Academy of Sciences
  • Macquarie University
  • Kunming University of Science and Technology
  • University of Illinois at Chicago

Research output: Contribution to journal › Article › peer-review

Abstract

Multiple instance learning (MIL) trains models from bags of instances, where each bag contains multiple instances, and only bag-level labels are available for supervision. The application of graph neural networks (GNNs) in capturing intrabag topology effectively improves MIL. Existing GNNs usually require filtering low-confidence edges among instances and adapting graph neural architectures to new bag structures. However, such asynchronous adjustments to structure and architecture are tedious and ignore their correlations. To tackle these issues, we propose a reinforced GNN framework for MIL (RGMIL), pioneering the exploitation of multiagent deep reinforcement learning (MADRL) in MIL tasks. MADRL enables the flexible definition or extension of factors that influence bag graphs or GNNs and provides synchronous control over them. Moreover, MADRL explores structure-to-architecture correlations while automating adjustments. Experimental results on multiple MIL datasets demonstrate that RGMIL achieves the best performance with excellent explainability. The code and data are available at https://github.com/RingBDStack/RGMIL.
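The abstract describes two cooperating agents that synchronously control the bag-graph structure (filtering low-confidence edges) and the GNN architecture, both trained from a shared bag-level reward. A minimal sketch of that idea, with two epsilon-greedy agents jointly selecting an edge-confidence threshold and a GNN depth; the agent class, action sets, and the stand-in reward function are illustrative assumptions, not the paper's actual MADRL formulation:

```python
import random

class BanditAgent:
    """Epsilon-greedy agent over a discrete action set (illustrative)."""
    def __init__(self, actions, epsilon=0.1):
        self.actions = actions
        self.epsilon = epsilon
        self.value = {a: 0.0 for a in actions}  # running mean reward per action
        self.count = {a: 0 for a in actions}

    def act(self):
        if random.random() < self.epsilon:
            return random.choice(self.actions)   # explore
        return max(self.actions, key=lambda a: self.value[a])  # exploit

    def update(self, action, reward):
        self.count[action] += 1
        self.value[action] += (reward - self.value[action]) / self.count[action]

def mock_bag_reward(threshold, depth):
    # Stand-in for the validation performance of a GNN trained with this
    # (edge threshold, depth) pair; here it peaks at threshold=0.5, depth=2.
    return 1.0 - abs(threshold - 0.5) - 0.1 * abs(depth - 2)

random.seed(0)
structure_agent = BanditAgent([0.1, 0.3, 0.5, 0.7, 0.9])  # edge-filter threshold
architecture_agent = BanditAgent([1, 2, 3, 4])            # number of GNN layers

for _ in range(500):
    t = structure_agent.act()
    d = architecture_agent.act()
    r = mock_bag_reward(t, d)       # one shared bag-level reward signal
    structure_agent.update(t, r)    # both agents updated synchronously,
    architecture_agent.update(d, r) # so structure/architecture correlations
                                    # are explored through the joint reward

best_t = max(structure_agent.actions, key=lambda a: structure_agent.value[a])
best_d = max(architecture_agent.actions, key=lambda a: architecture_agent.value[a])
print(best_t, best_d)
```

Because both agents are rewarded by the same bag-level signal, a choice of threshold that only pays off at a particular depth (or vice versa) is discovered jointly, which is the correlation the asynchronous adjust-then-retrain workflow criticized in the abstract would miss.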

Original language: English
Pages (from-to): 6693-6707
Number of pages: 15
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 36
Issue number: 4
DOI
Publication status: Published - 2025
