Robust and efficient edge-based visual odometry

Feihu Yan, Zhaoxin Li, Zhong Zhou* (*corresponding author)

Research output: Contribution to journal › Article › peer-review

Abstract

Visual odometry, which aims to estimate relative camera motion between sequential video frames, has been widely used in the fields of augmented reality, virtual reality, and autonomous driving. However, it is still quite challenging for state-of-the-art approaches to handle low-texture scenes. In this paper, we propose a robust and efficient visual odometry algorithm that directly utilizes edge pixels to track camera pose. In contrast to direct methods, we choose reprojection error to construct the optimization energy, which can effectively cope with illumination changes. The distance transform map built upon edge detection for each frame is used to improve tracking efficiency. A novel weighted edge alignment method together with sliding window optimization is proposed to further improve the accuracy. Experiments on public datasets show that the method is comparable to state-of-the-art methods in terms of tracking accuracy, while being faster and more robust.
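The efficiency argument in the abstract rests on precomputing a distance transform of the edge map: once built, the alignment cost of any reprojected edge pixel is a constant-time lookup instead of a nearest-edge search. The toy sketch below illustrates that idea only; it is not the authors' implementation, and the Manhattan metric, grid representation, and function names are assumptions made for the example.

```python
from collections import deque

def distance_transform(edge_map):
    """Multi-source BFS: Manhattan distance from each cell to the
    nearest edge pixel (cells with value 1 in edge_map)."""
    h, w = len(edge_map), len(edge_map[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for y in range(h):
        for x in range(w):
            if edge_map[y][x]:
                dist[y][x] = 0
                q.append((y, x))
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist

def alignment_cost(dist, projected_pixels):
    """Edge-alignment energy for a candidate pose: sum of distances
    from each reprojected edge pixel to its nearest image edge.
    Each evaluation is O(1) per pixel thanks to the precomputed map."""
    return sum(dist[y][x] for y, x in projected_pixels)

if __name__ == "__main__":
    # 5x5 frame with a vertical edge at column 2.
    edge_map = [[1 if x == 2 else 0 for x in range(5)] for _ in range(5)]
    dt = distance_transform(edge_map)
    print(alignment_cost(dt, [(0, 2), (1, 2)]))  # perfectly aligned -> 0
```

In a real tracker, this cost would be minimized over the camera pose (e.g. with Gauss-Newton over SE(3)), with the paper's per-pixel weighting applied on top; the sketch only shows why the distance transform makes each cost evaluation cheap.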

Original language: English
Pages (from-to): 467-481
Number of pages: 15
Journal: Computational Visual Media
Volume: 8
Issue number: 3
State: Published - Sep 2022

Keywords

  • distance transform
  • edge structure
  • low-texture
  • visual odometry (VO)
