
Tasks Reflected in the Eyes: Egocentric Gaze-Aware Visual Task Type Recognition in Virtual Reality

  • Zhimin Wang*
  • Feng Lu

*Corresponding author for this work

Beihang University

Research output: Contribution to journal › Article › peer-review

Abstract

With eye tracking finding widespread utility in augmented reality and virtual reality headsets, eye gaze has the potential to recognize users' visual tasks and adaptively adjust virtual content displays, thereby enhancing the intelligence of these headsets. However, current studies on visual task recognition often focus on scene-specific tasks, like copying tasks for office environments, which lack applicability to new scenarios, e.g., museums. In this paper, we propose four scene-agnostic task types for facilitating task type recognition across a broader range of scenarios. We present a new dataset that includes eye and head movement data recorded from 20 participants while they engaged in four task types across 15 360-degree VR videos. Using this dataset, we propose an egocentric gaze-aware task type recognition method, TRCLP, which achieves promising results. Additionally, we illustrate the practical applications of task type recognition with three examples. Our work offers valuable insights for content developers in designing task-aware intelligent applications. Our dataset and source code are available at zhimin-wang.github.io/TaskTypeRecognition.html.

Original language: English
Pages (from-to): 7277-7287
Number of pages: 11
Journal: IEEE Transactions on Visualization and Computer Graphics
Volume: 30
Issue number: 11
DOIs
State: Published - 2024

Keywords

  • Virtual reality
  • deep learning
  • eye tracking
  • intelligent application
  • visual task type recognition

