Understanding Human Gaze Communication by Spatio-Temporal Graph Reasoning

Lifeng Fan, Wenguan Wang, Siyuan Huang, Xinyu Tang, Song-Chun Zhu; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 5724-5733


This paper addresses the new problem of understanding human gaze communication in social videos at both the atomic and event levels, which is significant for studying human social interactions. To tackle this novel and challenging problem, we contribute a large-scale video dataset, VACATION, which covers diverse daily social scenes and gaze communication behaviors with complete annotations of objects and human faces, human attention, and communication structures and labels at both the atomic and event levels. Together with VACATION, we propose a spatio-temporal graph neural network that explicitly represents the diverse gaze interactions in social scenes and infers atomic-level gaze communication by message passing. We further propose an event network with an encoder-decoder structure to predict event-level gaze communication. Our experiments demonstrate that the proposed model significantly outperforms various baselines in predicting both atomic-level and event-level gaze communications.

BibTeX

@InProceedings{Fan_2019_ICCV,
  author    = {Fan, Lifeng and Wang, Wenguan and Huang, Siyuan and Tang, Xinyu and Zhu, Song-Chun},
  title     = {Understanding Human Gaze Communication by Spatio-Temporal Graph Reasoning},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2019}
}