Appearance-Based Gaze Estimation Using Attention and Difference Mechanism

Murthy L R D, Pradipta Biswas; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 3143-3152

Abstract


The appearance-based gaze estimation problem has received wide attention over the past few years. Although model-based approaches existed earlier, the availability of large datasets and novel deep learning techniques has enabled appearance-based methods to achieve better accuracy than model-based approaches. In this paper, we propose two novel techniques to improve gaze estimation accuracy. Our first approach, I2D-Net, uses a difference layer to eliminate common features from a subject's left and right eyes that are not pertinent to the gaze estimation task. Our second approach, AGE-Net, adapts the idea of an attention mechanism and assigns weights to the features extracted from eye images. I2D-Net performs on par with existing state-of-the-art approaches, while AGE-Net reports state-of-the-art errors of 4.09 and 7.44 degrees on the MPIIGaze and RT-Gene datasets, respectively. We performed ablation studies to understand the effectiveness of the proposed approaches, followed by an analysis of the gaze error distribution with respect to various factors of the MPIIGaze dataset.
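The two ideas named above can be sketched at a high level. Below is a minimal, hypothetical illustration (not the paper's actual architecture or layer names): a difference operation that cancels feature components shared by both eyes, and a sigmoid attention gate that reweights extracted eye features before gaze regression. The exact networks, feature dimensions, and attention form in I2D-Net and AGE-Net may differ.

```python
import numpy as np

def eye_feature_difference(left_feat, right_feat):
    """Difference-layer sketch: subtracting left- and right-eye feature
    vectors cancels components common to both eyes (e.g., illumination,
    identity) while retaining gaze-relevant asymmetries."""
    return left_feat - right_feat

def attention_weighting(feat, attn_logits):
    """Attention sketch: sigmoid weights in (0, 1) rescale each feature
    channel before it is passed to the gaze regressor."""
    weights = 1.0 / (1.0 + np.exp(-attn_logits))  # elementwise sigmoid gate
    return feat * weights

# Toy usage with 4-dimensional eye features (dimensions are illustrative).
left = np.array([0.9, 0.2, 0.5, 0.1])
right = np.array([0.9, 0.2, 0.1, 0.4])
diff = eye_feature_difference(left, right)   # shared components cancel to 0
gated = attention_weighting(diff, np.zeros(4))  # logit 0 -> weight 0.5
```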

Related Material


[bibtex]
@InProceedings{D_2021_CVPR,
  author    = {D, Murthy L R and Biswas, Pradipta},
  title     = {Appearance-Based Gaze Estimation Using Attention and Difference Mechanism},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2021},
  pages     = {3143-3152}
}