Multimodal feature fusion by relational reasoning and attention for visual question answering

  • Weifeng Zhang*
  • Jing Yu
  • Hua Hu
  • Haiyang Hu
  • Zengchang Qin

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Visual Question Answering (VQA) has recently become a hot topic in computer vision. A key to solving VQA lies in how to fuse the multimodal features extracted from the image and the question. In this paper, we show that combining visual relational reasoning with attention achieves more fine-grained feature fusion. Specifically, we design an effective and efficient module to reason about complex relationships between visual objects. In addition, a bilinear attention module learns question-guided attention over visual objects, which allows us to obtain more discriminative visual features. Given an image and a natural-language question, our VQA model learns the visual relational reasoning network and the attention network in parallel to fuse fine-grained textual and visual features, so that answers can be predicted accurately. Experimental results show that our approach achieves new state-of-the-art single-model performance on both the VQA 1.0 and VQA 2.0 datasets.
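The abstract describes a two-branch design: a relational reasoning module over pairs of visual objects and a question-guided bilinear attention module, run in parallel and fused for answer prediction. Below is a minimal PyTorch sketch of that overall structure, based only on the abstract; the module names, dimensions, pairwise aggregation, and fusion by concatenation are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class RelationalReasoning(nn.Module):
    """Pairwise relational reasoning over object features, conditioned on the question (assumed form)."""
    def __init__(self, obj_dim, q_dim, hid_dim):
        super().__init__()
        self.g = nn.Sequential(
            nn.Linear(2 * obj_dim + q_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, hid_dim), nn.ReLU(),
        )

    def forward(self, objs, q):
        # objs: (B, N, obj_dim) region features, q: (B, q_dim) question feature
        B, N, D = objs.shape
        oi = objs.unsqueeze(2).expand(B, N, N, D)                 # object i in each pair
        oj = objs.unsqueeze(1).expand(B, N, N, D)                 # object j in each pair
        qq = q.unsqueeze(1).unsqueeze(1).expand(B, N, N, q.size(-1))
        pairs = torch.cat([oi, oj, qq], dim=-1)                   # all (i, j) pairs with the question
        return self.g(pairs).sum(dim=(1, 2))                      # aggregate relations: (B, hid_dim)

class BilinearAttention(nn.Module):
    """Question-guided attention over objects via a low-rank bilinear interaction (assumed form)."""
    def __init__(self, obj_dim, q_dim, hid_dim):
        super().__init__()
        self.proj_v = nn.Linear(obj_dim, hid_dim)
        self.proj_q = nn.Linear(q_dim, hid_dim)
        self.score = nn.Linear(hid_dim, 1)

    def forward(self, objs, q):
        joint = self.proj_v(objs) * self.proj_q(q).unsqueeze(1)   # elementwise product of projections
        alpha = torch.softmax(self.score(joint), dim=1)           # attention weights over N objects
        return (alpha * objs).sum(dim=1)                          # attended visual feature: (B, obj_dim)

class VQAModel(nn.Module):
    """Runs the two branches in parallel and fuses their outputs to predict an answer."""
    def __init__(self, obj_dim=2048, q_dim=1024, hid_dim=512, num_answers=3000):
        super().__init__()
        self.relation = RelationalReasoning(obj_dim, q_dim, hid_dim)
        self.attention = BilinearAttention(obj_dim, q_dim, hid_dim)
        self.fuse_v = nn.Linear(obj_dim, hid_dim)
        self.classifier = nn.Sequential(
            nn.Linear(2 * hid_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, num_answers),
        )

    def forward(self, objs, q):
        rel = self.relation(objs, q)                  # relational reasoning branch
        att = self.fuse_v(self.attention(objs, q))    # question-guided attention branch
        return self.classifier(torch.cat([rel, att], dim=-1))
```

The sketch only mirrors the fusion pattern stated in the abstract (parallel branches over the same object and question features, combined before answer classification); the actual object detector, question encoder, and training details are described in the full paper.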

Original language: English
Pages (from-to): 116-126
Number of pages: 11
Journal: Information Fusion
Volume: 55
State: Published - Mar 2020

Keywords

  • Attention mechanism
  • Multimodal fusion
  • Visual question answering
  • Visual relational reasoning
