@InProceedings{Tomas_2021_CVPR,
  author    = {Tomas, Henri and Reyes, Marcus and Dionido, Raimarc and Ty, Mark and Mirando, Jonric and Casimiro, Joel and Atienza, Rowel and Guinto, Richard},
  title     = {GOO: A Dataset for Gaze Object Prediction in Retail Environments},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2021},
  pages     = {3125-3133}
}
GOO: A Dataset for Gaze Object Prediction in Retail Environments
Abstract
One of the most fundamental and information-laden actions humans perform is to look at objects. However, a survey of current works reveals that existing gaze-related datasets annotate only the pixel being looked at, not the boundaries of a specific object of interest. This lack of object annotation presents an opportunity for further advancing gaze estimation research. To this end, we present a challenging new task called gaze object prediction, where the goal is to predict a bounding box for a person's gazed-at object. To train and evaluate gaze networks on this task, we present the Gaze On Objects (GOO) dataset. GOO is composed of a large set of synthetic images (GOO-Synth) supplemented by a smaller subset of real images (GOO-Real) of people looking at objects in a retail environment. Our work establishes extensive baselines on GOO by re-implementing and evaluating selected state-of-the-art models on the tasks of gaze following and domain adaptation. Code is available on GitHub.