
ViTT: Vision transformer tracker

  • Xiaoning Zhu
  • Yannan Jia
  • Sun Jian*
  • Lize Gu
  • Zhang Pu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This paper presents a new transformer-based model for multi-object tracking (MOT). MOT is a spatiotemporal correlation task among objects of interest and one of the crucial technologies for multi-unmanned-aerial-vehicle (Multi-UAV) systems. The transformer is an encoder-decoder architecture built on self-attention that has been used successfully in natural language processing and is emerging in computer vision. This study proposes the Vision Transformer Tracker (ViTT), which uses a transformer encoder as the backbone and takes images directly as input. Unlike convolutional networks, it models global context at every encoder layer from the beginning, which helps address the challenges of occlusion and complex scenes. Through multi-task learning, the model simultaneously outputs object locations and corresponding appearance embeddings from a shared network. Our work demonstrates the effectiveness of transformer-based networks in complex computer vision tasks and paves the way for applying pure transformers to MOT. We evaluated the proposed model on the MOT16 dataset, achieving 65.7% MOTA, a competitive result compared with other typical multi-object trackers.
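To make the described architecture concrete, the following is a minimal PyTorch sketch of the idea the abstract outlines: a transformer encoder over image patch tokens serving as the backbone, with two lightweight heads producing per-token object locations and appearance embeddings via a shared network. The class name, hyperparameters (patch size, depth, head widths), and head designs are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class ViTTSketch(nn.Module):
    """Hypothetical ViTT-style tracker: transformer encoder backbone over
    image patches plus two multi-task heads (locations + embeddings)."""

    def __init__(self, img_size=224, patch_size=16, dim=256,
                 depth=6, heads=8, embed_dim=128):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2
        # Patch embedding: split the image into patches, project to tokens.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size,
                                     stride=patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, dim))
        # Transformer encoder backbone: global self-attention at every layer.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # Multi-task heads sharing the backbone features (assumed designs):
        self.loc_head = nn.Linear(dim, 4)          # per-token box (cx, cy, w, h)
        self.emb_head = nn.Linear(dim, embed_dim)  # per-token appearance embedding

    def forward(self, images):
        x = self.patch_embed(images)                # (B, dim, H/ps, W/ps)
        x = x.flatten(2).transpose(1, 2)            # (B, N, dim) token sequence
        x = self.encoder(x + self.pos_embed)        # global context per layer
        return self.loc_head(x), self.emb_head(x)   # locations + embeddings

# Usage: one forward pass on a dummy batch.
model = ViTTSketch()
boxes, embeddings = model(torch.randn(2, 3, 224, 224))
print(boxes.shape, embeddings.shape)  # (2, 196, 4), (2, 196, 128)
```

Because both heads read the same encoder features, detection and re-identification share one backbone, which is the multi-task arrangement the abstract describes.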

Original language: English
Article number: 5608
Journal: Sensors
Volume: 21
Issue number: 16
DOIs
State: Published - 2 Aug 2021

Keywords

  • Attention
  • Backbone
  • MOT
  • Transformer
