
Progressive Semantic-Visual Alignment and Refinement for Vision-Language Tracking

  • Yanjie Liang
  • Qiangqiang Wu
  • Lin Cheng
  • Changqun Xia
  • Jia Li*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In recent years, vision-language tracking has attracted growing attention in the tracking field. The central challenge of the task is to fuse the semantic representations of language with the visual representations of images. To this end, several vision-language tracking methods perform early or late fusion of visual and semantic features. However, these methods cannot take full advantage of the transformer architecture to mine useful cross-modal context at multiple levels. We therefore propose a new progressive joint vision-language transformer (PJVLT) that progressively aligns and refines visual embeddings with semantic embeddings for vision-language tracking. Specifically, to align visual signals with semantic signals, we insert a semantic-aware instance encoder layer (SAIEL) into each intermediate layer of the transformer encoder, performing progressive alignment of visual and semantic features. Furthermore, to highlight the multi-modal feature channels and patches corresponding to target objects, we propose a unified channel communication patch interaction layer (CCPIL), plugged into each intermediate layer of the transformer encoder, which progressively activates target-aware channels and patches of the aligned multi-modal features for fine-grained tracking. By progressively aligning and refining visual features with semantic features inside the transformer encoder, PJVLT can adaptively mine well-aligned vision-language context at coarse-to-fine levels, highlighting target objects at multiple levels for more discriminative tracking. Experiments on several tracking datasets show that the proposed PJVLT achieves favorable performance compared with both conventional trackers and other vision-language trackers.
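The abstract describes layer-wise fusion: at each intermediate encoder layer, visual features are first aligned with language features (SAIEL) and then refined by gating target-aware channels and patches (CCPIL). The sketch below is a conceptual NumPy illustration of that progressive scheme, not the paper's implementation; the cross-attention alignment, sigmoid channel/patch gating, and all function names are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def semantic_align(visual, semantic):
    """SAIEL analogue (assumed form): visual patches cross-attend to
    word embeddings, with a residual connection."""
    d = visual.shape[-1]
    attn = softmax(visual @ semantic.T / np.sqrt(d))   # (patches, words)
    return visual + attn @ semantic                    # residual fusion

def channel_patch_refine(feat):
    """CCPIL analogue (assumed form): gate feature channels and patches
    by their global mean response."""
    ch_gate = 1.0 / (1.0 + np.exp(-feat.mean(axis=0)))     # (channels,)
    patch_gate = 1.0 / (1.0 + np.exp(-feat.mean(axis=1)))  # (patches,)
    return feat * ch_gate[None, :] * patch_gate[:, None]

def progressive_encoder(visual, semantic, n_layers=4):
    """Apply alignment then refinement at every encoder layer,
    mimicking the coarse-to-fine progression described in the abstract."""
    for _ in range(n_layers):
        visual = semantic_align(visual, semantic)
        visual = channel_patch_refine(visual)
    return visual
```

A real tracker would use learned projections and multi-head attention inside each transformer block; this sketch only shows where the two modules sit in the layer stack.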

Original language: English
Pages (from-to): 4271-4286
Number of pages: 16
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 35
Issue number: 5
DOIs
State: Published - 2025

Keywords

  • Vision-language tracking
  • channel communication patch interaction
  • progressive joint vision-language transformer
  • semantic-aware instance encoder
