Generate Transferable Adversarial Physical Camouflages via Triplet Attention Suppression

  • Jiakai Wang
  • Xianglong Liu*
  • Zixin Yin
  • Yuxuan Wang
  • Jun Guo
  • Haotong Qin
  • Qingtao Wu
  • Aishan Liu

*Corresponding author for this work

Affiliations

  • Zhongguancun Laboratory
  • Beihang University
  • Henan University of Science and Technology

Research output: Contribution to journal › Article › peer-review

Abstract

Deep learning models are vulnerable to adversarial examples. As one of the most threatening types for practical deep learning systems, physical adversarial examples have received extensive attention in recent years. However, due to insufficient focus on intrinsic characteristics such as model-agnostic features, existing studies generate adversarial perturbations with unsatisfactory transferability when attacking different models. Motivated by the viewpoint that attention reflects the intrinsic characteristics of the recognition process, we propose the Transferable Attention Attack (TA2) method, which generates adversarial camouflages with strong transferable attacking ability by exploiting the visual attention mechanism, i.e., triplet attention suppression. Specifically, we generate transferable adversarial camouflages by distracting the model-shared attention patterns from the target to non-target regions, thereby promoting transferable attacking ability. Furthermore, we enhance the attacking ability by converging the model attention onto a non-ground-truth class, which exploits the lateral inhibition of visual models and activates the model perception of wrong classes. Besides, considering the visually suspicious appearance of adversarial camouflages, we also introduce human attention to improve their visual naturalness. We conduct extensive experiments in both the digital and physical worlds on classification tasks and comprehensively investigate the effectiveness of the discovered model attention mechanism, demonstrating that our method outperforms state-of-the-art methods.
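To make the attention-distraction idea in the abstract concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' released code or exact loss): it computes a Grad-CAM style attention map for the ground-truth class on a camouflaged image and penalizes attention that remains on the target (object) region, pushing attention toward non-target regions. The names `feature_layer`, `target_mask`, and the single-surrogate-model setup are illustrative assumptions; the paper additionally uses a non-ground-truth attention-convergence term and a human-attention naturalness term that are not sketched here.

```python
import torch
import torch.nn.functional as F

def gradcam_map(model, feature_layer, x, class_idx):
    """Grad-CAM style attention map for class `class_idx` on input batch x."""
    feats = {}
    handle = feature_layer.register_forward_hook(
        lambda _m, _i, out: feats.update(act=out))
    logits = model(x)
    handle.remove()

    act = feats["act"]                                    # (B, C, H, W) activations
    score = logits[:, class_idx].sum()
    # create_graph=True keeps the map differentiable w.r.t. the input,
    # so the camouflage can be optimized through it.
    grads = torch.autograd.grad(score, act, create_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)        # channel importance
    cam = F.relu((weights * act).sum(dim=1, keepdim=True))
    cam = cam / (cam.amax(dim=(2, 3), keepdim=True) + 1e-8)
    return cam                                            # (B, 1, h, w) in [0, 1]

def attention_distraction_loss(model, feature_layer, x_adv, target_mask, gt_class):
    """Penalize ground-truth-class attention that falls inside the object mask.

    Minimizing this term drives the model's attention off the target region
    toward non-target regions, the distraction component described above.
    """
    cam = gradcam_map(model, feature_layer, x_adv, gt_class)
    cam = F.interpolate(cam, size=target_mask.shape[-2:],
                        mode="bilinear", align_corners=False)
    return (cam * target_mask).mean()
```

In a full attack loop, this loss (summed over one or more surrogate models to capture model-shared attention) would be minimized with respect to the camouflage texture, alongside the other terms of the triplet suppression objective.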

Original language: English
Pages (from-to): 5084-5100
Number of pages: 17
Journal: International Journal of Computer Vision
Volume: 132
Issue number: 11
DOIs
State: Published - Nov 2024

Keywords

  • Human attention evasion
  • Lateral inhibition mechanism
  • Model attention distraction
  • Physical adversarial camouflage
