
GLAFE: A Global-Local Feature Learning Self-Attention Encoder for UAV Relocalization in Weak-Texture Environments

  • Yuan Chen
  • Jie Jiang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

When uncrewed aerial vehicles (UAVs) conduct exploration tasks in weakly textured environments, such as planetary surfaces or outdoor scenes with sparse features, the absence of GPS typically necessitates the use of visual SLAM for localization. However, feature sparsity, motion blur caused by rapid camera movements, and viewpoint variations often lead to failures in feature-based pose tracking and relocalization. To address this issue, we propose a Global-Local feature learning Self-Attention Encoder (GLAFE), which simultaneously generates enhanced local and global feature descriptors by exploiting the correlations between local features, thereby improving robustness and efficiency in weakly textured scenes with viewpoint changes. A multi-objective optimization strategy based on shared samples is proposed to facilitate the joint learning of global and local features for GLAFE. Experiments on simulated Mars surface images and real-world flight data demonstrate that the proposed approach achieves better comprehensive performance in terms of robustness, accuracy, and efficiency compared with classical retrieval-based and other deep learning methods.
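The abstract describes the core idea: a self-attention encoder that exploits correlations between local features to produce both enhanced local descriptors and a single global descriptor from one pass. The paper's actual architecture is not reproduced here; the following is a minimal numpy sketch of that general pattern, assuming single-head scaled dot-product attention over the local features and mean pooling for the global descriptor (the function name `glafe_sketch` and the pooling choice are illustrative assumptions, not the authors' implementation).

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def glafe_sketch(local_feats, Wq, Wk, Wv):
    """Illustrative global-local encoding via self-attention.

    local_feats: (N, D) array of N local feature vectors.
    Wq, Wk, Wv:  (D, D) projection matrices (learned in a real model).
    Returns (enhanced_local, global_desc): (N, D) and (D,) L2-normalized.
    """
    Q, K, V = local_feats @ Wq, local_feats @ Wk, local_feats @ Wv
    d = Q.shape[-1]
    # (N, N) attention weights encode pairwise correlations between local features
    attn = softmax(Q @ K.T / np.sqrt(d), axis=-1)
    # Each local descriptor is re-expressed with context from all others
    enhanced = attn @ V
    enhanced = enhanced / np.linalg.norm(enhanced, axis=1, keepdims=True)
    # One global descriptor pooled from the context-aware local descriptors
    global_desc = enhanced.mean(axis=0)
    global_desc = global_desc / np.linalg.norm(global_desc)
    return enhanced, global_desc

rng = np.random.default_rng(0)
N, D = 16, 32
feats = rng.standard_normal((N, D))
Wq, Wk, Wv = (rng.standard_normal((D, D)) * 0.1 for _ in range(3))
enhanced, g = glafe_sketch(feats, Wq, Wk, Wv)
```

In a retrieval-then-match relocalization pipeline, the global descriptor would be used for fast candidate-image retrieval and the enhanced local descriptors for subsequent pose estimation, which is consistent with the efficiency argument in the abstract.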

Original language: English
Pages (from-to): 5151-5157
Number of pages: 7
Journal: IEEE Robotics and Automation Letters
Volume: 11
Issue number: 4
DOIs
State: Published - 2026

Keywords

  • Relocalization
  • Transformers
  • pose estimation
  • uncrewed aerial vehicles
  • weak-texture

