
Scene Graph Generation with Hierarchical Context

Guanghui Ren, Lejian Ren, Yue Liao, Si Liu*, Bo Li, Jizhong Han, Shuicheng Yan

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Scene graph generation has received increasing attention in recent years, and enhancing predicate representations is an important entry point to this task. Various methods have been proposed to exploit context for representation enhancement. In this brief, we analyze the decisive factors that significantly affect relation detection results. Our analysis shows that the spatial correlations between objects, the focused regions of objects, and global hints related to the relations strongly influence relation prediction and contradiction elimination. Based on this analysis, we propose a hierarchical context network (HCNet) to generate a scene graph. HCNet consists of three contexts, namely interaction context, depression context, and global context, which integrate information at the pair, object, and graph levels, respectively. Experiments show that our method outperforms the state-of-the-art methods on the Visual Genome (VG) dataset.
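The abstract describes fusing three levels of context (pair, object, and graph) into each relation's representation. The toy sketch below illustrates one plausible reading of that aggregation scheme; the function name, feature dimensions, and simple concatenation-based fusion are all illustrative assumptions, not the paper's actual HCNet architecture.

```python
import numpy as np

def hierarchical_context(pair_feats, obj_feats, graph_feat):
    """Toy sketch of hierarchical context aggregation.

    For each candidate relation, combines three levels of context:
      - interaction context: features of the subject-object pair,
      - depression context:  features of the participating objects,
      - global context:      one graph-level feature shared by all pairs.

    Fusion by concatenation is an assumption for illustration only.
    """
    n_pairs = pair_feats.shape[0]
    # Broadcast the single graph-level feature to every candidate pair.
    global_ctx = np.tile(graph_feat, (n_pairs, 1))
    # Fuse pair-, object-, and graph-level information.
    return np.concatenate([pair_feats, obj_feats, global_ctx], axis=1)

# Example: 4 candidate pairs; dims 8 (pair), 6 (object), 5 (graph).
pairs = np.random.randn(4, 8)
objs = np.random.randn(4, 6)
graph = np.random.randn(5)
fused = hierarchical_context(pairs, objs, graph)
print(fused.shape)  # (4, 19)
```

In a real model, the fused vector would then be fed to a predicate classifier; the paper's contexts are learned attention-based modules rather than raw concatenation.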

Original language: English
Article number: 9084259
Pages (from-to): 909-915
Number of pages: 7
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 32
Issue number: 2
DOIs
State: Published - Feb 2021

Keywords

  • Attention mechanism
  • context aggregation
  • scene graph generation
