
From Local Details to Global Context: Advancing Vision-Language Models with Attention-Based Selection

  • Lincan Cai
  • Jingxuan Kang
  • Shuang Li*
  • Wenxuan Ma
  • Binhui Xie
  • Zhida Qin
  • Jian Liang

*Corresponding author for this work

Research output: Contribution to journal › Conference article › Peer-reviewed

Abstract

Pretrained vision-language models (VLMs), e.g., CLIP, demonstrate impressive zero-shot capabilities on downstream tasks. Prior research highlights the crucial role of visual augmentation techniques, like random cropping, in aligning images with the fine-grained class descriptions generated by large language models (LLMs), significantly enhancing zero-shot performance by incorporating multi-view information. However, the inherent randomness of these augmentations can inevitably introduce background artifacts and cause models to overly focus on local details, compromising global semantic understanding. To address these issues, we propose an Attention-Based Selection (ABS) method that moves from local details to global context: it applies attention-guided cropping in both the raw image and the feature space, and supplements global semantic information through strategic feature selection. Additionally, we introduce a soft matching technique to effectively filter LLM descriptions for better alignment. ABS achieves state-of-the-art performance on out-of-distribution generalization and zero-shot classification tasks. Notably, ABS is training-free and even rivals few-shot and test-time adaptation methods. Our code is available at https://github.com/BIT-DA/ABS.
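The core idea of attention-guided cropping, as opposed to the random cropping the abstract critiques, can be illustrated with a minimal sketch. This is not the authors' ABS implementation (which operates on CLIP attention maps in both image and feature space); it assumes only a generic 2D saliency map `attn` aligned with the image, and selects the fixed-size window with the largest attention mass instead of a random window:

```python
import numpy as np

def attention_guided_crop(image, attn, crop_size):
    """Return the crop window with the largest total attention.

    Hypothetical sketch: `image` is H x W x C, `attn` is a non-negative
    H x W saliency map, `crop_size` is (height, width).
    """
    H, W = attn.shape
    ch, cw = crop_size
    # Integral image: window sums become four lookups each.
    integral = np.pad(attn, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    best, best_yx = -np.inf, (0, 0)
    for y in range(H - ch + 1):
        for x in range(W - cw + 1):
            s = (integral[y + ch, x + cw] - integral[y, x + cw]
                 - integral[y + ch, x] + integral[y, x])
            if s > best:
                best, best_yx = s, (y, x)
    y, x = best_yx
    return image[y:y + ch, x:x + cw]

# Toy example: attention concentrated in the lower-right quadrant,
# so the selected crop avoids the uninformative background.
img = np.arange(8 * 8 * 3).reshape(8, 8, 3)
attn = np.zeros((8, 8))
attn[4:, 4:] = 1.0
crop = attention_guided_crop(img, attn, (4, 4))
```

Unlike a random crop, which may land entirely on background, this selection always covers the high-attention region, matching the paper's motivation for replacing randomness with attention-based selection.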

Original language: English
Pages (from-to): 6229-6242
Number of pages: 14
Journal: Proceedings of Machine Learning Research
Volume: 267
Publication status: Published - 2025
Event: 42nd International Conference on Machine Learning, ICML 2025 - Vancouver, Canada
Duration: 13 Jul 2025 - 19 Jul 2025
