
Teaching robots to do object assembly using multi-modal 3D vision

  • Weiwei Wan*
  • Feng Lu
  • Zepei Wu
  • Kensuke Harada

*Corresponding author for this work

Affiliations: National Institute of Advanced Industrial Science and Technology (AIST); The University of Osaka

Research output: Contribution to journal › Article › peer-review

Abstract

This paper develops an intelligent robot assembly system that uses multi-modal 3D vision for next-generation industrial assembly. The system works in two phases: in the first, humans demonstrate the assembly task to robots; in the second, robots detect objects, plan grasps, and assemble the objects following the human demonstration using AI-based search. A notorious difficulty in implementing such a system is the poor precision of 3D visual detection. This paper presents multi-modal approaches to overcome that difficulty: AR markers are used in the teaching phase to detect human operation, and point clouds combined with geometric constraints are used in the robot execution phase to avoid unexpected occlusion and noise. Several experiments examine the precision and correctness of the approaches, and their applicability is demonstrated by integrating them with graph-model-based motion planning and by executing the results on industrial robots in real-world scenarios.
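The paper does not specify its implementation details here, but as a minimal sketch of the teaching-phase idea, marker-based 6-DoF pose detection could be done with OpenCV's ArUco module as below. The marker dictionary, marker side length, and camera intrinsics are placeholder assumptions, not values from the paper.

```python
# Hypothetical sketch of AR-marker pose detection for the teaching phase.
# Assumes an ArUco marker attached to the manipulated object and OpenCV >= 4.7.
import cv2
import numpy as np

MARKER_SIDE = 0.04  # marker side length in meters (assumed)
camera_matrix = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0,   0.0,   1.0]], dtype=np.float32)  # placeholder intrinsics
dist_coeffs = np.zeros(5, dtype=np.float32)  # assume negligible lens distortion

def detect_marker_pose(image):
    """Return (rvec, tvec) of the first detected marker in the camera frame, or None."""
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
    corners, ids, _rejected = detector.detectMarkers(image)
    if ids is None:
        return None
    # 3D corner coordinates of the marker in its own frame
    # (order matches ArUco: top-left, top-right, bottom-right, bottom-left).
    h = MARKER_SIDE / 2.0
    obj_points = np.array([[-h,  h, 0.0],
                           [ h,  h, 0.0],
                           [ h, -h, 0.0],
                           [-h, -h, 0.0]], dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj_points, corners[0].reshape(4, 2),
                                  camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```

In the execution phase the paper instead relies on point clouds and geometric constraints, since markers may be occluded; the sketch above only illustrates the demonstration-recording side.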

Original language: English
Pages (from-to): 85-93
Number of pages: 9
Journal: Neurocomputing
Volume: 259
DOIs
State: Published - 11 Oct 2017

Keywords

  • 3D visual detection
  • Motion planning
  • Robot manipulation
