Abstract
Gaze estimation aims to accurately estimate the direction or position at which a person is looking. With the development of deep learning techniques, a number of gaze estimation methods have been proposed and have achieved state-of-the-art performance. However, these methods are evaluated in within-dataset settings, and their performance drops when they are tested on unseen datasets. We argue that this is caused by the infinite and continuous nature of gaze labels. To alleviate this problem, we propose using gaze frontalization as an auxiliary task to constrain gaze estimation. Based on this, we propose a novel gaze domain generalization framework named the Gaze Frontalization-based Auxiliary Learning (GFAL) framework, which embeds the gaze frontalization process, i.e., guiding the feature so that the eyeball can rotate to look at the front (camera), without using any target-domain information during training. Experimental results show that our proposed framework achieves state-of-the-art performance on the gaze domain generalization task, which is competitive with or even superior to SOTA gaze unsupervised domain adaptation methods.
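The abstract describes training a gaze estimator with frontalization as an auxiliary task. The following is a minimal illustrative sketch of that general idea, a primary gaze loss combined with a weighted auxiliary loss; the function names and the weighting factor `lam` are assumptions for illustration, not the paper's actual formulation.

```python
import math

def angular_error(pred, target):
    """Angular difference in degrees between two 3D gaze vectors,
    a common evaluation metric in gaze estimation."""
    dot = sum(p * t for p, t in zip(pred, target))
    norm = math.sqrt(sum(p * p for p in pred)) * math.sqrt(sum(t * t for t in target))
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def auxiliary_total_loss(gaze_loss, frontalization_loss, lam=0.1):
    """Hypothetical multi-task objective: the primary gaze-estimation loss
    plus the auxiliary frontalization loss scaled by `lam`."""
    return gaze_loss + lam * frontalization_loss
```

During training, such an objective would let the frontalization task regularize the shared gaze features without requiring any target-domain data.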
| Original language | English |
|---|---|
| Pages (from-to) | 6333-6341 |
| Number of pages | 9 |
| Journal | Proceedings of the AAAI Conference on Artificial Intelligence |
| Volume | 38 |
| Issue | 6 |
| DOI | |
| Publication status | Published - 25 Mar 2024 |
| Event | 38th AAAI Conference on Artificial Intelligence, AAAI 2024 - Vancouver, Canada. Duration: 20 Feb 2024 → 27 Feb 2024 |
Title: Gaze from Origin: Learning for Generalized Gaze Estimation by Embedding the Gaze Frontalization Process