
Object semantic grid mapping with 2D LiDAR and RGB-D camera for domestic robot navigation

  • Xianyu Qi, Wei Wang*, Ziwei Liao, Xiaoyu Zhang, Dongsheng Yang, Ran Wei
  • *Corresponding author for this work
  • Beihang University; Ltd.

Research output: Contribution to journal › Article › peer-review

Abstract

Occupancy grid maps are sufficient for mobile robots to complete metric navigation tasks in domestic environments. However, they lack the semantic information needed to support social goal selection and human-friendly operation modes. In this paper, we propose an object semantic grid mapping system with 2D Light Detection and Ranging (LiDAR) and RGB-D sensors to solve this problem. First, we use laser-based Simultaneous Localization and Mapping (SLAM) to generate an occupancy grid map and obtain the robot trajectory. Then, we apply object detection to extract object semantics from color images and use joint interpolation to refine the camera poses. Based on the object detections, depth images, and interpolated poses, we build a point cloud with object instances. To generate object-oriented minimum bounding rectangles, we propose a method for extracting the dominant directions of the room. Furthermore, we build object goal spaces to help robots select navigation goals conveniently and socially. We have verified the system on the Robot@Home dataset; the results show that our system is effective.
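The abstract's step of fitting object-oriented minimum bounding rectangles can be illustrated with a short sketch. The paper's exact method for extracting the room's dominant directions is not detailed here, so the sketch below assumes the dominant direction angle `theta` has already been estimated; given 2D object points in the map frame, it rotates them into the room-aligned frame, takes the axis-aligned extent there, and rotates the resulting rectangle corners back. All names are illustrative, not from the paper.

```python
import math

def oriented_bbox(points, theta):
    """Minimum bounding rectangle aligned to a dominant direction.

    points: iterable of (x, y) object points in the map frame
    theta:  dominant room direction in radians (assumed pre-estimated)
    Returns the four rectangle corners, back in the map frame.
    """
    # Rotate points by -theta so the room's dominant axis becomes the x-axis.
    c, s = math.cos(-theta), math.sin(-theta)
    rotated = [(c * x - s * y, s * x + c * y) for x, y in points]

    # Axis-aligned extent in the room-aligned frame.
    xs, ys = zip(*rotated)
    xmin, xmax = min(xs), max(xs)
    ymin, ymax = min(ys), max(ys)
    corners = [(xmin, ymin), (xmax, ymin), (xmax, ymax), (xmin, ymax)]

    # Rotate the corners back into the map frame.
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in corners]
```

For a square rotated 45° in the map frame, passing `theta = math.pi / 4` recovers a tight rectangle through the square's own corners, whereas an axis-aligned box would overestimate its footprint.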

Original language: English
Article number: 5782
Journal: Applied Sciences (Switzerland)
Volume: 10
Issue number: 17
DOIs
State: Published - Sep 2020

Keywords

  • 2D LiDAR
  • Domestic navigation
  • Object semantic grid map
  • RGB-D camera

