TY - GEN
T1 - Comparing Single-modal and Multimodal Interaction in an Augmented Reality System
AU - Wang, Zhimin
AU - Yu, Huangyue
AU - Wang, Haofei
AU - Wang, Zongji
AU - Lu, Feng
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/11
Y1 - 2020/11
N2 - Multimodal interaction is expected to offer a better user experience in Augmented Reality (AR) and has thus become a recent research focus. However, due to the lack of hardware-level support, most existing works only combine two modalities at a time, e.g., gesture and speech. Gaze-based interaction techniques have been explored for screen-based applications, but have rarely been used in AR systems. In this paper, we propose a multimodal interactive system that integrates gaze, gesture and speech in a flexibly configurable augmented reality system. Our lightweight head-mounted device supports accurate gaze tracking, hand gesture recognition and speech recognition simultaneously. More importantly, the system can be easily configured into different modality combinations to study the effects of different interaction techniques. We evaluated the system in a table lamp scenario and compared the performance of different interaction techniques. The experimental results show that Gaze+Gesture+Speech is superior in terms of performance.
AB - Multimodal interaction is expected to offer a better user experience in Augmented Reality (AR) and has thus become a recent research focus. However, due to the lack of hardware-level support, most existing works only combine two modalities at a time, e.g., gesture and speech. Gaze-based interaction techniques have been explored for screen-based applications, but have rarely been used in AR systems. In this paper, we propose a multimodal interactive system that integrates gaze, gesture and speech in a flexibly configurable augmented reality system. Our lightweight head-mounted device supports accurate gaze tracking, hand gesture recognition and speech recognition simultaneously. More importantly, the system can be easily configured into different modality combinations to study the effects of different interaction techniques. We evaluated the system in a table lamp scenario and compared the performance of different interaction techniques. The experimental results show that Gaze+Gesture+Speech is superior in terms of performance.
KW - AR system
KW - augmented reality
KW - gaze
KW - gesture
KW - multimodal interaction
KW - speech
UR - https://www.scopus.com/pages/publications/85099604867
U2 - 10.1109/ISMAR-Adjunct51615.2020.00052
DO - 10.1109/ISMAR-Adjunct51615.2020.00052
M3 - Conference contribution
AN - SCOPUS:85099604867
T3 - Adjunct Proceedings of the 2020 IEEE International Symposium on Mixed and Augmented Reality, ISMAR-Adjunct 2020
SP - 165
EP - 166
BT - Adjunct Proceedings of the 2020 IEEE International Symposium on Mixed and Augmented Reality, ISMAR-Adjunct 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE International Symposium on Mixed and Augmented Reality, ISMAR-Adjunct 2020
Y2 - 9 November 2020 through 13 November 2020
ER -