Monocular Free-Head 3D Gaze Tracking With Deep Learning and Geometry Constraints

Wangjiang Zhu, Haoping Deng; The IEEE International Conference on Computer Vision (ICCV), 2017, pp. 3143-3152

Abstract


Free-head 3D gaze tracking outputs both the eye location and the gaze vector in 3D space, and it has wide applications in scenarios such as driver monitoring, advertisement analysis and surveillance. A reliable and low-cost monocular solution is critical for pervasive usage in these areas. Noticing that a gaze vector is a composition of head pose and eyeball movement in a geometrically deterministic way, we propose a novel gaze transform layer to connect separate head pose and eyeball movement models. The proposed decomposition does not suffer from head-gaze correlation overfitting and makes it possible to use existing datasets collected for other tasks. To add stronger supervision for better network training, we propose a two-step training strategy, which first trains sub-tasks with rough labels and then jointly trains with accurate gaze labels. To enable good cross-subject performance under various conditions, we collect a large dataset which has full coverage of head poses and eyeball movements, contains 200 subjects, and has diverse illumination conditions. Our deep solution achieves state-of-the-art gaze tracking accuracy, reaching 5.6 degrees cross-subject prediction error using a small network running at 1000 fps on a single CPU (excluding face alignment time) and 4.3 degrees cross-subject error with a deeper network.
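The geometric composition the abstract describes can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the angle conventions (pitch/yaw Euler angles, roll omitted, camera looking down -z) and the function names are assumptions made for illustration. The key idea is that the gaze transform is a fixed, differentiable rotation, so separate head-pose and eyeball-movement predictions compose deterministically into one 3D gaze vector:

```python
import numpy as np

def head_rotation(pitch, yaw):
    """Rotation from the head coordinate frame to the camera frame.
    Hypothetical convention: yaw about y, then pitch about x; roll omitted."""
    Ry = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(yaw), 0.0, np.cos(yaw)]])
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(pitch), -np.sin(pitch)],
                   [0.0, np.sin(pitch), np.cos(pitch)]])
    return Ry @ Rx

def gaze_transform(head_pitch, head_yaw, eye_pitch, eye_yaw):
    """Compose head pose and eyeball movement into a 3D gaze vector
    in camera coordinates (a sketch of the geometric constraint)."""
    # Gaze direction in the head frame, from eyeball pitch/yaw;
    # (0, 0) means looking straight ahead along the head's -z axis.
    g_head = np.array([np.cos(eye_pitch) * np.sin(eye_yaw),
                       np.sin(eye_pitch),
                       -np.cos(eye_pitch) * np.cos(eye_yaw)])
    # Rotate into camera coordinates using the head pose.
    return head_rotation(head_pitch, head_yaw) @ g_head

# A frontal head with a centered eyeball looks straight at the camera:
g = gaze_transform(0.0, 0.0, 0.0, 0.0)  # -> array([ 0.,  0., -1.])
```

Because every operation is a fixed rotation, gradients flow through the transform to both sub-models during joint training, which is what lets the decomposed networks be supervised with a single accurate gaze label.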

Related Material


[pdf]
[bibtex]
@InProceedings{Zhu_2017_ICCV,
author = {Zhu, Wangjiang and Deng, Haoping},
title = {Monocular Free-Head 3D Gaze Tracking With Deep Learning and Geometry Constraints},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}