Multi-Task Consistency-Preserving Adversarial Hashing for Cross-Modal Retrieval

  • De Xie*
  • Cheng Deng
  • Chao Li
  • Xianglong Liu
  • Dacheng Tao

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Owing to its low storage cost and high query efficiency, cross-modal hashing has received increasing attention recently. However, by failing to bridge the inherent gap between modalities, most existing cross-modal hashing methods have limited capability to explore the semantic consistency between data from different modalities, leading to unsatisfactory retrieval performance. To address this problem, we propose a novel deep hashing method named Multi-Task Consistency-Preserving Adversarial Hashing (CPAH) to fully explore the semantic consistency and correlation between different modalities for efficient cross-modal retrieval. First, we design a consistency refined module (CR) to divide the representation of each modality into two irrelevant parts, i.e., a modality-common and a modality-private representation. Then, a multi-task adversarial learning module (MA) is presented, which brings the modality-common representations of different modalities close to each other in both feature distribution and semantic consistency. Finally, compact and powerful hash codes are generated from the modality-common representation. Comprehensive evaluations on three representative cross-modal benchmark datasets show that our method is superior to state-of-the-art cross-modal hashing methods.
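The pipeline described in the abstract can be sketched in code. The following is a minimal, hypothetical illustration only: the projection layers, dimensions, and function names are assumptions for exposition, not the authors' implementation, and the adversarial (MA) training step is omitted.

```python
import numpy as np

# Illustrative sketch of the CPAH forward path from the abstract:
# (1) the CR module splits a modality's feature into modality-common
#     and modality-private parts; (2) hash codes are binarized from
#     the modality-common part. All names/dims are assumptions.

rng = np.random.default_rng(0)

d_in, d_rep = 128, 32  # assumed feature and representation sizes

# Two separate projections stand in for the CR module's branches.
w_common, b_common = rng.standard_normal((d_in, d_rep)), np.zeros(d_rep)
w_private, b_private = rng.standard_normal((d_in, d_rep)), np.zeros(d_rep)

def cr_module(feat):
    """Split one modality's feature into (common, private) representations."""
    common = np.tanh(feat @ w_common + b_common)
    private = np.tanh(feat @ w_private + b_private)
    return common, private

def hash_codes(common):
    """Binarize the modality-common representation into {-1, +1} codes."""
    return np.sign(common)

image_feat = rng.standard_normal(d_in)  # stand-in for an image network feature
common, private = cr_module(image_feat)
codes = hash_codes(common)
print(codes.shape)  # 32-bit code for this toy configuration
```

In the full method, the MA module would additionally train the common branches of both modalities adversarially so their outputs agree in distribution and semantics; only then are the binary codes comparable across modalities.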

Original language: English
Article number: 8954946
Pages (from-to): 3626-3637
Number of pages: 12
Journal: IEEE Transactions on Image Processing
Volume: 29
DOIs
State: Published - 2020

Keywords

  • Cross-modal retrieval
  • adversarial
  • consistency-preserving
  • hashing
  • multi-task
