TY - GEN
T1 - Accurate and fast classification of foot gestures for virtual locomotion
AU - Shi, Xinyu
AU - Pan, Junjun
AU - Hu, Zeyong
AU - Lin, Juncong
AU - Guo, Shihui
AU - Liao, Minghong
AU - Pan, Ye
AU - Liu, Ligang
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/10
Y1 - 2019/10
AB - This work explores the use of foot gestures for locomotion in virtual environments. Foot gestures are represented as the distribution of plantar pressure and detected by three sparsely located sensors on each insole. A Long Short-Term Memory (LSTM) model is chosen as the classifier to recognize the performer's foot gesture from the captured pressure signals. The trained classifier directly takes the noisy and sparse sensor data as input and handles seven categories of foot gestures (stand, walk forward/backward, run, jump, slide left and right) without manual definition of signal features. The classifier recognizes foot gestures even in the presence of large sensor-specific, inter-person and intra-person variations. Results show that an accuracy of ~80% can be achieved across users with different shoe sizes and ~85% for users with the same shoe size. A novel method, Dual-Check Till Consensus, is proposed to reduce the latency of gesture recognition from 2 seconds to 0.5 seconds and to increase the accuracy to over 97%. This method offers a promising way to achieve lower latency and higher accuracy at a minor additional computational cost. The high accuracy and fast classification of our method could lead to wider use of foot patterns in human-computer interaction.
KW - Gestural input
KW - Human centered computing
KW - Human computer interaction (HCI)
KW - Interaction techniques
KW - Interactive systems and tools
KW - User interface programming
UR - https://www.scopus.com/pages/publications/85078269610
U2 - 10.1109/ISMAR.2019.000-6
DO - 10.1109/ISMAR.2019.000-6
M3 - Conference contribution
AN - SCOPUS:85078269610
T3 - Proceedings - 2019 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2019
SP - 178
EP - 189
BT - Proceedings - 2019 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 18th IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2019
Y2 - 14 October 2019 through 18 October 2019
ER -