Abstract
Gaze prediction refers to algorithmic models that predict a user's current gaze direction from various types of user information. Current methods for predicting gaze in virtual scenes typically rely on generalized models and leave considerable room for improvement on specific interactive tasks. This paper focuses on improving gaze prediction for the interactive task flow of finding, locking onto, and approaching target objects in virtual scenes. We first construct the first dataset for this task, consisting of gaze recordings; object, helmet, and controller parameters; and recorded videos, captured while 21 users performed five interaction tasks in three interactive scenes. Each user's interaction process is divided into three stages: finding the target object, locking onto it, and approaching it. We then conduct stage-wise correlation analysis and feed the parameter set most correlated with gaze into the network for training. The proposed method is validated on the self-constructed dataset, achieving a gaze prediction error of 2.60°, a 21.45% improvement over the current SOTA method's error of 3.31°, significantly enhancing gaze prediction accuracy for this task scenario.
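The stage-wise parameter selection described in the abstract, choosing for each interaction stage the candidate parameters most correlated with the gaze signal, can be sketched roughly as follows. All names, the stage labels, and the toy data here are illustrative assumptions, not details from the paper:

```python
import numpy as np

# Illustrative sketch (assumed names, toy data): for each interaction
# stage, rank candidate parameters by absolute Pearson correlation with
# the gaze signal and keep the most correlated ones.
rng = np.random.default_rng(0)
STAGES = ["finding", "locking", "approaching"]

def select_features(features: np.ndarray, gaze: np.ndarray, top_k: int = 2):
    """Return indices of the top_k columns of `features`, ranked by
    |Pearson r| against the 1-D `gaze` signal."""
    corrs = np.array([abs(np.corrcoef(features[:, i], gaze)[0, 1])
                      for i in range(features.shape[1])])
    return sorted(np.argsort(corrs)[-top_k:].tolist())

# Toy data: 200 frames x 4 candidate parameters per stage; the gaze
# signal is constructed to depend mainly on parameters 1 and 3.
selected = {}
for stage in STAGES:
    X = rng.normal(size=(200, 4))
    y = 0.9 * X[:, 1] + 0.5 * X[:, 3] + 0.1 * rng.normal(size=200)
    selected[stage] = select_features(X, y)

print(selected)
```

In practice the selected parameter subset per stage would then be concatenated into the network's input features; the paper's actual selection criterion and network architecture are not specified in this record.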
| Translated title of the contribution | Staged Gaze Prediction in Virtual Scene Interaction Tasks |
|---|---|
| Original language | Traditional Chinese |
| Pages (from-to) | 207-215 |
| Number of pages | 9 |
| Journal | Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics |
| Volume | 37 |
| Issue number | 2 |
| DOI | |
| Publication status | Published - Feb 2025 |
Keywords
- deep learning
- gaze prediction
- human-computer interaction
- virtual reality
Fingerprint
Explore the research topics of '面向虚拟场景交互任务的分阶段视线预测方法' (Staged Gaze Prediction Method for Virtual Scene Interaction Tasks). Together they form a unique fingerprint.