
Staged Gaze Prediction in Virtual Scene Interaction Tasks

  • Guoan Li
  • Junchen Liu
  • Miao Wang*
  *Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

Abstract

A gaze prediction method is an algorithmic model that predicts the user's current gaze direction from various types of user information. However, current methods for predicting gaze in virtual scenes typically rely on generalized models and leave considerable room for improvement on specific interactive tasks. This paper focuses on improving gaze prediction for the interactive task flow of finding, locking onto, and approaching target objects in virtual scenes. We first construct the first dataset for this task, consisting of gaze recordings; object, headset, and controller parameters; and recorded videos from five interaction tasks performed by 21 users in three interactive scenes. The users' interaction process is divided into three stages: finding target objects, locking onto target objects, and approaching target objects. We then conduct a stage-wise correlation analysis, selecting the parameter set most correlated with gaze as input to the network for training. The proposed method is validated on the self-constructed dataset, achieving a gaze prediction error of 2.60°, a 21.45% improvement over the current SOTA method's error of 3.31°, significantly enhancing gaze prediction accuracy for this task scenario.
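The 2.60° figure is an angular error between predicted and ground-truth gaze directions, a standard metric in gaze estimation. A minimal sketch of how such an error might be computed from 3D direction vectors (the function name and vector representation are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def angular_error_deg(pred, gt):
    """Angular error in degrees between predicted and ground-truth gaze directions."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    # Cosine of the angle between the two direction vectors.
    cos = np.dot(pred, gt) / (np.linalg.norm(pred) * np.linalg.norm(gt))
    # Clip to guard against floating-point values slightly outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Identical directions -> 0 degrees; orthogonal directions -> 90 degrees.
print(angular_error_deg([0.0, 0.0, 1.0], [0.0, 0.0, 1.0]))  # 0.0
print(angular_error_deg([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # 90.0
```

Averaging this quantity over all test frames would yield a dataset-level error of the kind the abstract reports.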

Translated title of the contribution: Staged Gaze Prediction in Virtual Scene Interaction Tasks
Original language: Traditional Chinese
Pages (from-to): 207-215
Number of pages: 9
Journal: Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics
Volume: 37
Issue number: 2
DOI
Publication status: Published - Feb 2025

Keywords

  • deep learning
  • gaze prediction
  • human-computer interaction
  • virtual reality
