GazeCaps: Gaze Estimation With Self-Attention-Routed Capsules
Abstract
Gaze estimation is the task of estimating eye gaze from facial features. People tend to infer gaze by considering different facial properties across the whole image and the relations among them. However, existing methods rarely consider these various properties. In this paper, we propose a novel GazeCaps framework that represents different facial properties as different capsules. Because capsules encode properties as vectors, they respond sensitively to transformations of those properties, which is effective for gaze estimation, where many facial components are nonlinearly transformed according to head direction and perspective. Furthermore, we propose a Self-Attention Routing (SAR) module that dynamically allocates attention to the capsules carrying important information and can be optimized in a single pass without routing iterations. Through rigorous experiments, we confirm that the proposed method achieves state-of-the-art performance on various benchmarks. We also demonstrate the generalization performance of the proposed model through a cross-dataset evaluation.
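The paper's implementation details are not reproduced on this page. As a rough illustration only, the sketch below shows one plausible way a self-attention-routed capsule layer could be written in PyTorch: input capsule vectors are projected into per-output-capsule prediction vectors, a single attention pass (rather than iterative dynamic routing) weights those predictions, and a squash nonlinearity produces the output capsules. The class name, tensor shapes, single-head attention design, and squash function are assumptions and may differ from the paper's actual SAR module.

import torch
import torch.nn as nn


def squash(v, dim=-1, eps=1e-8):
    # Standard capsule "squash": preserves direction, bounds the norm in [0, 1).
    norm_sq = (v * v).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * v / torch.sqrt(norm_sq + eps)


class SelfAttentionRouting(nn.Module):
    """Illustrative (assumed) routing layer: maps input capsules to output
    capsules in one attention pass, without an iterative agreement loop."""

    def __init__(self, in_caps, in_dim, out_caps, out_dim):
        super().__init__()
        # Per-output-capsule linear maps of the input capsule vectors.
        self.W = nn.Parameter(0.01 * torch.randn(out_caps, in_caps, in_dim, out_dim))
        self.q = nn.Linear(out_dim, out_dim)
        self.k = nn.Linear(out_dim, out_dim)

    def forward(self, u):                                        # u: (B, in_caps, in_dim)
        # Prediction vectors u_hat: (B, out_caps, in_caps, out_dim)
        u_hat = torch.einsum('bid,oide->boie', u, self.W)
        # Single-head attention between a pooled query and each prediction.
        query = self.q(u_hat.mean(dim=2, keepdim=True))          # (B, out_caps, 1, out_dim)
        keys = self.k(u_hat)                                     # (B, out_caps, in_caps, out_dim)
        scores = (query * keys).sum(-1) / keys.size(-1) ** 0.5   # (B, out_caps, in_caps)
        attn = scores.softmax(dim=-1).unsqueeze(-1)              # weights over input capsules
        # Attention-weighted sum over input capsules, then squash.
        return squash((attn * u_hat).sum(dim=2))                 # (B, out_caps, out_dim)


if __name__ == "__main__":
    layer = SelfAttentionRouting(in_caps=32, in_dim=8, out_caps=2, out_dim=16)
    print(layer(torch.randn(4, 32, 8)).shape)  # torch.Size([4, 2, 16])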
Related Material
[pdf]
[supp]
[bibtex]
@InProceedings{Wang_2023_CVPR,
    author    = {Wang, Hengfei and Oh, Jun O. and Chang, Hyung Jin and Na, Jin Hee and Tae, Minwoo and Zhang, Zhongqun and Choi, Sang-Il},
    title     = {GazeCaps: Gaze Estimation With Self-Attention-Routed Capsules},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {2669-2677}
}