Zero-shot Scene Graph Generation via Triplet Calibration and Reduction

Research output: Contribution to journal › Article › peer-review

Abstract

Scene Graph Generation (SGG) plays a pivotal role in downstream vision-language tasks. Existing SGG methods typically suffer from poor compositional generalization on unseen triplets: they are trained on incompletely annotated scene graphs dominated by frequent triplets, and thus bias toward these seen triplets during inference. To address this issue, we propose a Triplet Calibration and Reduction (T-CAR) framework in this article. In our framework, a triplet calibration loss is first presented to regularize the representations of diverse triplets and to simultaneously excavate the unseen triplets in incompletely annotated training scene graphs. Moreover, the unseen space of scene graphs is usually several times larger than the seen space, since it contains a huge number of unrealistic compositions. Thus, we propose an unseen space reduction loss that shifts the attention of excavation to reasonable unseen compositions, facilitating model training. Finally, we propose a contextual encoder that improves compositional generalization to unseen triplets by explicitly modeling the relative spatial relations between subjects and objects. Extensive experiments show that our approach achieves consistent improvements for zero-shot SGG over state-of-the-art methods. The code is available at https://github.com/jkli1998/T-CAR.
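To make the two loss ideas in the abstract concrete, the sketch below shows a *hypothetical* composite objective for one subject-object pair: standard cross-entropy on the annotated predicate, plus a term that penalizes probability mass placed on implausible predicate compositions (the "unseen space reduction" intuition). This is an illustrative assumption, not the paper's actual formulation — the function name `calibrated_loss`, the `plausible_mask` input, and the weighting `alpha` are all invented for exposition; consult the T-CAR repository for the real losses.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over predicate logits."""
    e = np.exp(x - x.max())
    return e / e.sum()

def calibrated_loss(logits, label, plausible_mask, alpha=0.5):
    """Hypothetical composite loss for one subject-object pair.

    logits         -- predicate scores, shape (num_predicates,)
    label          -- index of the annotated (seen) predicate
    plausible_mask -- boolean array; False marks unrealistic compositions
    alpha          -- weight of the reduction term (assumed hyperparameter)
    """
    p = softmax(logits)
    ce = -np.log(p[label] + 1e-12)        # supervision on the seen triplet
    reduction = p[~plausible_mask].sum()  # mass on implausible compositions
    return ce + alpha * reduction

# Toy usage: three candidate predicates, the third deemed unrealistic.
logits = np.array([2.0, 0.5, -1.0])
loss_no_mask = calibrated_loss(logits, 0, np.array([True, True, True]))
loss_masked = calibrated_loss(logits, 0, np.array([True, True, False]))
```

With every predicate marked plausible the reduction term vanishes and only the cross-entropy remains; flagging a predicate as implausible adds a penalty proportional to the probability the model assigns it, nudging mass toward reasonable compositions.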

Original language: English
Article number: 5
Journal: ACM Transactions on Multimedia Computing, Communications and Applications
Volume: 20
Issue number: 1
State: Published - 24 Aug 2023

Keywords

  • Scene analysis and understanding
  • compositional zero-shot learning
  • scene graph generation
