
Deep hashing by discriminating hard examples

  • Cheng Yan
  • Guansong Pang
  • Xiao Bai*
  • Chunhua Shen
  • Jun Zhou
  • Edwin Hancock
  • *Corresponding author for this work
  • Beihang University
  • Adelaide University
  • Griffith University Queensland
  • University of York

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This paper tackles a rarely explored but critical problem within learning to hash, i.e., to learn hash codes that effectively discriminate hard similar and dissimilar examples, to empower large-scale image retrieval. Hard similar examples refer to image pairs from the same semantic class that demonstrate some shared appearance but have different fine-grained appearance. Hard dissimilar examples are image pairs that come from different semantic classes but exhibit similar appearance. These hard examples generally have a small distance due to the shared appearance. Therefore, effective encoding of the hard examples can well discriminate the relevant images within a small Hamming distance, enabling more accurate retrieval in the top-ranked returned images. However, most existing hashing methods cannot capture this key information as their optimization is dominated by easy examples, i.e., distant similar/dissimilar pairs that share no or limited appearance. To address this problem, we introduce a novel Gamma distribution-enabled and symmetric Kullback-Leibler divergence-based loss, which is dubbed dual hinge loss because it works similarly as imposing two smoothed hinge losses on the respective similar and dissimilar pairs. Specifically, the loss enforces exponentially variant penalization on the hard similar (dissimilar) examples to emphasize and learn their fine-grained difference. It meanwhile imposes a bounding penalization on easy similar (dissimilar) examples to prevent the dominance of the easy examples in the optimization while preserving the high-level similarity (dissimilarity). This enables our model to well encode the key information carried by both easy and hard examples. 
Extensive empirical results on three widely-used image retrieval datasets show that (i) our method consistently and substantially outperforms state-of-the-art competing methods using hash codes of the same length and (ii) our method can use significantly (e.g., 50%-75%) shorter hash codes to perform substantially better than, or comparably well to, the competing methods.
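The hard/easy weighting described above can be illustrated with a generic smoothed pairwise hinge loss. This is only a minimal numpy sketch of the idea, not the paper's actual Gamma distribution-enabled symmetric KL loss; the `margin` value and the softplus smoothing are illustrative assumptions. Similar pairs are penalized more steeply as their code distance grows (hard similar examples), and dissimilar pairs are penalized more steeply as their distance shrinks below the margin (hard dissimilar examples), while easy pairs on either side contribute small, bounded terms.

```python
import numpy as np

def softplus(x):
    """Numerically stable smooth approximation of the hinge max(0, x)."""
    x = np.asarray(x, dtype=float)
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

def dual_hinge_loss(dist, similar, margin=4.0):
    """Mean pairwise loss over hash-code distances (illustrative stand-in).

    dist:    Hamming-like distances between pairs of hash codes
    similar: 1 for same-class (similar) pairs, 0 for dissimilar pairs
    NOTE: a generic sketch, not the Gamma/symmetric-KL loss from the paper;
    `margin` is an assumed hyperparameter.
    """
    dist = np.asarray(dist, dtype=float)
    similar = np.asarray(similar, dtype=float)
    # Similar pairs: loss grows with distance, so hard similar pairs
    # (same class but large distance) dominate; easy close pairs
    # contribute only about log(2).
    loss_sim = softplus(dist)
    # Dissimilar pairs: loss grows as distance falls below the margin,
    # so hard dissimilar pairs (different class but small distance)
    # dominate; easy well-separated pairs contribute almost nothing.
    loss_dis = softplus(margin - dist)
    return float(np.mean(similar * loss_sim + (1.0 - similar) * loss_dis))
```

For example, a hard similar pair (distance 6, label 1) incurs a much larger penalty than an easy similar pair (distance 0.5, label 1), and symmetrically for dissimilar pairs, which is the qualitative behavior the abstract attributes to the dual hinge loss.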

Original language: English
Title of host publication: MM 2019 - Proceedings of the 27th ACM International Conference on Multimedia
Publisher: Association for Computing Machinery, Inc
Pages: 1535-1542
Number of pages: 8
ISBN (Electronic): 9781450368896
DOI
Publication status: Published - 15 Oct 2019
Event: 27th ACM International Conference on Multimedia, MM 2019 - Nice, France
Duration: 21 Oct 2019 - 25 Oct 2019

Publication series

Name: MM 2019 - Proceedings of the 27th ACM International Conference on Multimedia

Conference

Conference: 27th ACM International Conference on Multimedia, MM 2019
Country/Territory: France
City: Nice
Period: 21/10/19 - 25/10/19
