The design of multimodal human-machine interface for teleoperation

Research output: Contribution to journal › Conference article › peer-review

Abstract

Teleoperation is a viable way to project a human operator's intelligence into places that are inaccessible or dangerous to people, or where expertise and resources are unavailable. Because of the distance between the human operator and the remote environment, the human-machine interface is an important determinant of overall system performance and efficiency. This paper proposes a new design method for a multimodal teleoperation interface. A distributed graphic predictive display subsystem based on virtual reality is implemented, and the feedback information acquired from the remote environment, such as live images, audio, and force data, is organized and presented to the human operator in an appropriate way. Experimental results demonstrate that the multimodal human-machine interface can reduce the operator's mental workload and facilitate teleoperation. Several key technologies underlying this multimodal interface, such as the synchronization mechanism of the distributed predictive simulation subsystem and the real-time transmission of live multimedia over the Internet under narrow bandwidth, are also developed in this paper.
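The core idea of a predictive display, as described in the abstract, is that operator commands drive a local virtual model immediately while the real remote feedback arrives only after the communication delay. The following minimal sketch illustrates that principle; all names (`PredictiveDisplay`, `delay_steps`, the scalar state) are hypothetical simplifications, not the paper's distributed VR implementation.

```python
# Minimal sketch of the predictive-display principle: commands update a
# local (predicted) model instantly, while the remote state lags by the
# round-trip link delay. Names and the scalar state are illustrative only.
from collections import deque

class PredictiveDisplay:
    def __init__(self, delay_steps):
        self.delay_steps = delay_steps          # simulated link delay (steps)
        self.predicted_state = 0.0              # local virtual-model state
        self.remote_state = 0.0                 # actual remote-site state
        self.link = deque([0.0] * delay_steps)  # commands still in transit

    def step(self, command):
        # Local model responds immediately -> operator sees no delay.
        self.predicted_state += command
        # The same command reaches the remote site only after the delay.
        self.link.append(command)
        delayed_command = self.link.popleft()
        self.remote_state += delayed_command
        return self.predicted_state, self.remote_state

pd = PredictiveDisplay(delay_steps=3)
for _ in range(5):
    predicted, actual = pd.step(1.0)
# The predicted state leads the actual state by exactly the 3-step delay,
# which is the gap the overlaid live video would reveal to the operator.
```

In the paper's setting the local model is a distributed VR simulation rather than a scalar, but the timing relationship sketched here is the same: the operator interacts with the prediction, and delayed live feedback is used to verify or correct it.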

Original language: English
Pages (from-to): 3187-3192
Number of pages: 6
Journal: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics
Volume: 5
State: Published - 2001
Event: 2001 IEEE International Conference on Systems, Man and Cybernetics - Tucson, AZ, United States
Duration: 7 Oct 2001 - 10 Oct 2001

Keywords

  • Distributed predictive display
  • Multimodal human-machine interface
  • Teleoperation
