Multi-modal virtual reality dental training system with integrated haptic-visual-audio display

Dangxiao Wang*, Yuru Zhang, Zhitao Sun
*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Multi-modal signal fusion is an important way to improve the immersion and fidelity of virtual reality training systems. With the help of a robotic-arm haptic device, it is possible to provide a haptic-visual-audio display for training dentists. An architecture is first proposed to integrate the different signals in a training system. A highly efficient sound synthesis algorithm is proposed, based on a hybrid approach: parameters are identified from recorded sound signals, and a signal synthesis method then forms the final sound signal. A mapping method among audio, haptic and visual signals is proposed based on parameter mapping under various interaction states. An experiment based on the Phantom Desktop device demonstrates the stability of the rendering algorithm and the improved fidelity gained by adding audio feedback to the existing haptic-visual dental training system.
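The abstract's hybrid approach — identifying parameters from a recorded contact sound and then synthesizing the signal at run time — can be illustrated with a minimal sketch. This is not the authors' algorithm; it assumes a common damped-sinusoid (modal-style) model in which a recorded tool-tooth contact sound is reduced to a frequency and decay rate, and the haptic contact force is mapped to the synthesized amplitude. The function name, the force-to-amplitude mapping, and all parameter values are hypothetical.

```python
import math

def synthesize_contact_sound(force, frequency=2000.0, decay=80.0,
                             duration=0.05, sample_rate=44100):
    """Synthesize a damped-sinusoid contact sound.

    frequency and decay stand in for parameters identified from a
    recorded contact sound; force comes from the haptic device and
    is mapped to amplitude (an illustrative, hypothetical mapping).
    Returns a list of audio samples in [-1, 1].
    """
    amplitude = min(1.0, 0.1 * force)  # hypothetical force -> loudness map
    n_samples = int(duration * sample_rate)
    return [
        amplitude * math.exp(-decay * t) * math.sin(2.0 * math.pi * frequency * t)
        for t in (i / sample_rate for i in range(n_samples))
    ]

# At each haptic update, the measured contact force drives the audio:
samples = synthesize_contact_sound(force=5.0)
```

Because synthesis is a closed-form evaluation rather than playback of a stored waveform, the sound can be regenerated every haptic frame with parameters that track the current interaction state, which is the efficiency argument behind hybrid parameter-identification-plus-synthesis schemes.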

Original language: English
Title of host publication: Robotic Welding, Intelligence and Automation
Editors: Tzyh-Jong Tarn, Shan-Ben Chen, Changjiu Zhou
Pages: 453-462
Number of pages: 10
DOIs
State: Published - 2007

Publication series

Name: Lecture Notes in Control and Information Sciences
Volume: 362
ISSN (Print): 0170-8643
