MTGLS: Multi-Task Gaze Estimation With Limited Supervision
Abstract
Robust gaze estimation is a challenging task, even for deep CNNs, due to the unavailability of large-scale labeled data. Moreover, gaze annotation is a time-consuming process and requires specialized hardware setups. We propose MTGLS: a Multi-Task Gaze estimation framework with Limited Supervision, which leverages abundantly available non-annotated facial image data. MTGLS distills knowledge from off-the-shelf facial image analysis models and learns strong feature representations of human eyes, guided by three complementary auxiliary signals: (a) the line of sight of the pupil (i.e., pseudo-gaze) defined by the localized facial landmarks, (b) the head pose given by Euler angles, and (c) the orientation of the eye patch (left/right eye). To overcome the inherent noise in these supervisory signals, MTGLS further incorporates a noise distribution modelling approach. Our experimental results show that MTGLS learns highly generalized representations that consistently perform well on a range of datasets. Our proposed framework outperforms unsupervised state-of-the-art methods on the CAVE dataset (by approx. 6.43%) and even supervised state-of-the-art methods on the Gaze360 dataset (by approx. 6.59%).
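The following is a minimal sketch of the kind of multi-task setup the abstract describes, not the authors' implementation: a shared eye-patch encoder with three auxiliary heads (pseudo-gaze, head-pose Euler angles, left/right eye-patch orientation) trained with a weighted sum of per-task losses on signals distilled from off-the-shelf models. All module names, dimensions, loss choices, and weights below are illustrative assumptions, and the paper's noise distribution modelling is omitted.

import torch
import torch.nn as nn

class MultiTaskEyeEncoder(nn.Module):
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        # Shared convolutional backbone over eye patches (stand-in for the paper's encoder).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # One auxiliary head per supervisory signal distilled from off-the-shelf models.
        self.pseudo_gaze_head = nn.Linear(feat_dim, 2)  # pitch/yaw pseudo-gaze from landmarks
        self.head_pose_head = nn.Linear(feat_dim, 3)    # Euler angles from a head-pose estimator
        self.eye_side_head = nn.Linear(feat_dim, 2)     # left/right eye-patch orientation

    def forward(self, eye_patch):
        feat = self.backbone(eye_patch)
        return (self.pseudo_gaze_head(feat),
                self.head_pose_head(feat),
                self.eye_side_head(feat))

def multitask_loss(preds, targets, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of the three auxiliary losses (weights are assumptions)."""
    gaze_pred, pose_pred, side_pred = preds
    gaze_tgt, pose_tgt, side_tgt = targets
    l_gaze = nn.functional.l1_loss(gaze_pred, gaze_tgt)        # regression to pseudo-gaze
    l_pose = nn.functional.l1_loss(pose_pred, pose_tgt)        # regression to Euler angles
    l_side = nn.functional.cross_entropy(side_pred, side_tgt)  # left/right classification
    return weights[0] * l_gaze + weights[1] * l_pose + weights[2] * l_side

# Toy usage with random tensors standing in for eye patches and distilled labels.
model = MultiTaskEyeEncoder()
x = torch.randn(8, 3, 64, 64)
targets = (torch.randn(8, 2), torch.randn(8, 3), torch.randint(0, 2, (8,)))
loss = multitask_loss(model(x), targets)
loss.backward()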
Related Material

[pdf]
[arXiv]
[bibtex]

@InProceedings{Ghosh_2022_WACV,
  author    = {Ghosh, Shreya and Hayat, Munawar and Dhall, Abhinav and Knibbe, Jarrod},
  title     = {MTGLS: Multi-Task Gaze Estimation With Limited Supervision},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2022},
  pages     = {3223-3234}
}