Jailbreak Vision Language Models via Bi-Modal Adversarial Prompt

  • Zonghao Ying
  • , Aishan Liu*
  • , Tianyuan Zhang
  • , Zhengmin Yu
  • , Siyuan Liang
  • , Xianglong Liu
  • , Dacheng Tao
  • *Corresponding author for this work

Research output: Contribution to journalArticlepeer-review

Abstract

In the realm of large vision language models (LVLMs), jailbreak attacks serve as a red-teaming approach to bypass guardrails and uncover safety implications. Existing jailbreaks predominantly focus on the visual modality, perturbing solely visual inputs in the prompt for attacks. However, they fall short when confronted with aligned models that fuse visual and textual features simultaneously for generation. To address this limitation, this paper introduces the Bi-Modal Adversarial Prompt Attack (BAP), which executes jailbreaks by optimizing textual and visual prompts cohesively. Initially, we adversarially embed universally adversarial perturbations in an image, guided by a few-shot query-agnostic corpus (e.g., affirmative prefixes and negative inhibitions). This process ensures that the adversarial image prompt LVLMs to respond positively to harmful queries. Subsequently, leveraging the image, we optimize textual prompts with specific harmful intent. In particular, we utilize a large language model to analyze jailbreak failures and employ chain-of-thought reasoning to refine textual prompts through a feedback-iteration manner. To validate the efficacy of our approach, we conducted extensive evaluations on various datasets and LVLMs, demonstrating that our BAP significantly outperforms other methods by large margins (+29.03% in attack success rate on average). Additionally, we showcase the potential of our attacks on black-box commercial LVLMs, such as GPT-4o and Gemini.

Original languageEnglish
Pages (from-to)7153-7165
Number of pages13
JournalIEEE Transactions on Information Forensics and Security
Volume20
DOIs
StatePublished - 2025

Keywords

  • Large vision language models (LVLMs)
  • adversarial perturbation
  • chain-of-thought
  • jailbreak attack
  • red-teaming

Fingerprint

Dive into the research topics of 'Jailbreak Vision Language Models via Bi-Modal Adversarial Prompt'. Together they form a unique fingerprint.

Cite this