LiDAR-ToF-Binocular depth fusion using gradient priors

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Most robotic systems operate in complex environments in which a single vision sensor cannot fully perceive the surroundings. In this paper, we focus on how to combine the depth images of a traditional binocular camera, a novel ToF (time-of-flight) camera, and an emerging 16-line LiDAR (light detection and ranging) to obtain an accurate, dense depth image. To unify the depth images of the different sensors into the same perspective, we employ a simple method for extrinsic parameter calibration. Based on the unified depth images, a fast and accurate fusion algorithm is developed. Our experiments show that the proposed method greatly improves depth density and accuracy while maintaining a fast running speed.
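The abstract does not give the fusion algorithm's details, but the general idea of merging aligned depth maps from sensors with different noise characteristics can be sketched with per-pixel inverse-variance weighting. The sketch below is an illustrative assumption, not the paper's method: the three depth maps are presumed already warped into a common view (the extrinsic calibration step), zero marks missing measurements, and the per-sensor noise levels `sigmas` are hypothetical constants.

```python
import numpy as np

def fuse_depth(depths, sigmas):
    """Inverse-variance weighted fusion of aligned depth maps.

    depths: list of HxW arrays in a common view; 0 marks pixels
            where a sensor has no measurement.
    sigmas: per-sensor noise standard deviations (illustrative
            constants, not values from the paper).
    """
    depths = [np.asarray(d, dtype=float) for d in depths]
    # Weight each valid pixel by 1/sigma^2; invalid pixels get weight 0.
    weights = [(d > 0) / s**2 for d, s in zip(depths, sigmas)]
    wsum = np.sum(weights, axis=0)
    num = np.sum([w * d for w, d in zip(weights, depths)], axis=0)
    fused = np.zeros_like(depths[0])
    valid = wsum > 0  # at least one sensor measured this pixel
    fused[valid] = num[valid] / wsum[valid]
    return fused
```

With this scheme, a sparse but accurate sensor (small sigma, e.g. LiDAR) dominates wherever it has returns, while the dense but noisier stereo depth fills the remaining pixels; a gradient-prior method such as the paper's would additionally exploit image edges when densifying the sparse measurements.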

Original language: English
Title of host publication: Proceedings of the 32nd Chinese Control and Decision Conference, CCDC 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 2024-2029
Number of pages: 6
ISBN (Electronic): 9781728158549
DOIs
State: Published - Aug 2020
Event: 32nd Chinese Control and Decision Conference, CCDC 2020 - Hefei, China
Duration: 22 Aug 2020 - 24 Aug 2020

Publication series

Name: Proceedings of the 32nd Chinese Control and Decision Conference, CCDC 2020

Conference

Conference: 32nd Chinese Control and Decision Conference, CCDC 2020
Country/Territory: China
City: Hefei
Period: 22/08/20 - 24/08/20

Keywords

  • Binocular camera
  • Depth image
  • Fusion
  • LiDAR
  • Time of flight camera
