VLM-MSGraph: Vision Language Model-enabled Multi-hierarchical Scene Graph for robotic assembly

  • Shufei Li
  • Zhijie Yan
  • Zuoxu Wang*
  • Yiping Gao

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Intelligent robotic assembly is becoming a pivotal component of the manufacturing sector, driven by growing demands for flexibility, sustainability, and resilience. Robots in manufacturing environments need perception, decision-making, and manipulation skills to support the flexible production of diverse products. However, traditional robotic assembly systems typically rely on time-consuming training processes specific to fixed settings, lacking generalization and zero-shot learning capabilities. To address these challenges, this paper introduces a Vision Language Model-enabled Multi-hierarchical Scene Graph (VLM-MSGraph) approach for robotic assembly, featuring generalized assembly sequence learning and 3D manipulation in open scenarios. The MSGraph incorporates high-level task planning structured as triplets, organized by multiple VLM agents. At a low level, the MSGraph retains 3D spatial relationships between industrial parts, enabling the robot to perform assembly tasks while accounting for object geometry to achieve effective manipulation. Assembly drawings, physics simulations, and assembly tasks in a laboratory setting are used to evaluate the proposed system, advancing flexible automation in robotics.
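The abstract describes a two-level structure: high-level task plans stored as triplets produced by VLM agents, and low-level 3D spatial relations between industrial parts. The paper's code is not reproduced here, so the following is only a minimal Python sketch of what such a multi-hierarchical scene graph could look like; every class, field, and relation name (TaskTriplet, PartNode, mount_on, and so on) is an illustrative assumption, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TaskTriplet:
    """High-level assembly step, e.g. ("gear", "mount_on", "shaft")."""
    subject: str
    relation: str
    obj: str

@dataclass
class PartNode:
    """Low-level node holding a part's 6-DoF pose and geometry handle."""
    name: str
    pose: tuple  # (x, y, z, roll, pitch, yaw) in the robot base frame
    mesh_path: str = ""  # path to the part's CAD mesh, if available

@dataclass
class MSGraph:
    """Two-level scene graph: task triplets over spatial part nodes."""
    triplets: list = field(default_factory=list)   # high level: task plan
    parts: dict = field(default_factory=dict)      # low level: name -> PartNode

    def add_step(self, subject: str, relation: str, obj: str) -> None:
        """Append a high-level assembly step as a triplet."""
        self.triplets.append(TaskTriplet(subject, relation, obj))

    def next_step(self) -> Optional[TaskTriplet]:
        """Return the next pending step, or None when the plan is done."""
        return self.triplets[0] if self.triplets else None

# Usage sketch: a VLM agent would populate the triplets from an assembly
# drawing, while a 3D perception module fills in the part poses.
graph = MSGraph()
graph.parts["gear"] = PartNode("gear", (0.4, 0.1, 0.02, 0.0, 0.0, 0.0))
graph.parts["shaft"] = PartNode("shaft", (0.5, -0.1, 0.05, 0.0, 0.0, 1.57))
graph.add_step("gear", "mount_on", "shaft")
print(graph.next_step())  # TaskTriplet(subject='gear', relation='mount_on', obj='shaft')
```

Separating the symbolic plan from the metric part poses, as sketched above, is what would let a high-level planner reorder steps without touching the perception layer, and vice versa.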

Original language: English
Article number: 102978
Journal: Robotics and Computer-Integrated Manufacturing
Volume: 94
State: Published - Aug 2025

Keywords

  • Flexible automation
  • Robotic assembly
  • Scene graph
  • Smart manufacturing
  • Vision language model

