Learning to Personalize in Appearance-Based Gaze Tracking

Erik Lindén, Jonas Sjöstrand, Alexandre Proutière; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2019

Abstract


Personal variations severely limit the performance of appearance-based gaze tracking. Adapting to these variations with standard neural network model-adaptation methods is difficult. The problems range from overfitting, due to small amounts of training data, to underfitting, due to restrictive model architectures. We tackle these problems by introducing the SPatial Adaptive GaZe Estimator (SPAZE). By modeling personal variations as a low-dimensional latent parameter space, SPAZE provides just enough adaptability to capture the range of personal variations without being prone to overfitting. Calibrating SPAZE for a new person reduces to solving a small and simple optimization problem. SPAZE achieves an error of 2.70 degrees on the MPIIGaze dataset, improving on the state-of-the-art by 14%. We contribute to gaze tracking research by empirically showing that personal variations are well-modeled as a 3-dimensional latent parameter space for each eye. We show that this low dimensionality is expected by examining model-based approaches to gaze tracking.
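
The core idea in the abstract, conditioning the gaze network on a small per-person latent code and calibrating a new user by fitting only that code, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the network architecture, the names SpazeLikeNet and calibrate, the assumed 36x60 eye-patch size, and all hyperparameters are assumptions; only the 3-dimensional latent code per eye and the "freeze the network, fit the latent code" calibration step come from the abstract.

# Illustrative sketch only; architecture, names, and hyperparameters are assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 3  # abstract: personal variations modeled as a 3-dim latent space per eye


class SpazeLikeNet(nn.Module):
    """Appearance-based gaze estimator conditioned on a per-person latent code."""

    def __init__(self, latent_dim: int = LATENT_DIM):
        super().__init__()
        # Small CNN feature extractor for a 36x60 grayscale eye patch (assumed size).
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )
        feat_dim = 32 * 6 * 12  # flattened feature size for a 36x60 input
        # The personal latent code is concatenated with image features before regression.
        self.head = nn.Sequential(
            nn.Linear(feat_dim + latent_dim, 64), nn.ReLU(),
            nn.Linear(64, 2),  # gaze as (yaw, pitch) in radians
        )

    def forward(self, eye_image: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        f = self.features(eye_image)
        return self.head(torch.cat([f, z], dim=1))


def calibrate(model: SpazeLikeNet, eye_images: torch.Tensor,
              gaze_targets: torch.Tensor, steps: int = 200) -> torch.Tensor:
    """Fit only the 3-dim latent code for a new person; network weights stay frozen."""
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)
    z = torch.zeros(1, LATENT_DIM, requires_grad=True)
    opt = torch.optim.Adam([z], lr=0.05)
    for _ in range(steps):
        opt.zero_grad()
        pred = model(eye_images, z.expand(eye_images.shape[0], -1))
        loss = nn.functional.mse_loss(pred, gaze_targets)
        loss.backward()
        opt.step()
    return z.detach()


if __name__ == "__main__":
    net = SpazeLikeNet()
    # A handful of calibration samples for one person (random stand-ins here).
    imgs = torch.randn(9, 1, 36, 60)
    targets = torch.randn(9, 2) * 0.2
    z_person = calibrate(net, imgs, targets)
    print("calibrated latent code:", z_person)

Because only three parameters per eye are optimized during calibration, a few samples suffice and the risk of overfitting is small, which is the trade-off the abstract highlights.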

Related Material


[pdf]
[bibtex]
@InProceedings{Linden_2019_ICCV,
author = {Linden, Erik and Sjostrand, Jonas and Proutiere, Alexandre},
title = {Learning to Personalize in Appearance-Based Gaze Tracking},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2019}
}