MGTR: End-to-End Mutual Gaze Detection with Transformer

Hang Guo, Zhengxi Hu, Jingtai Liu; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 1590-1605

Abstract


People looking at each other, or mutual gaze, is ubiquitous in daily interactions, and detecting it is of great significance for understanding human social scenes. Existing mutual gaze detection methods rely on two-stage pipelines, whose inference speed is limited by the two-stage design and whose second-stage performance depends on the quality of the first stage. In this paper, we propose a novel one-stage mutual gaze detection framework, the Mutual Gaze TRansformer (MGTR), which performs mutual gaze detection in an end-to-end manner. By designing mutual gaze instance triples, MGTR detects each human head bounding box and simultaneously infers mutual gaze relationships from global image information, streamlining the whole process. Experimental results on two mutual gaze datasets show that our method accelerates mutual gaze detection without losing performance. An ablation study shows that different components of MGTR capture different levels of semantic information in images. Code is available at https://github.com/Gmbition/MGTR.
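The mutual gaze instance triple described above pairs two detected head bounding boxes with a binary mutual-gaze label. A minimal sketch of this data structure in Python (the names, box format, and helper function here are illustrative assumptions, not the paper's released code):

```python
from dataclasses import dataclass
from typing import List, Tuple

# Assumed box format: (x1, y1, x2, y2) in image coordinates.
Box = Tuple[float, float, float, float]

@dataclass
class MutualGazeTriple:
    """One prediction unit: two head boxes and whether they share mutual gaze."""
    head_a: Box
    head_b: Box
    mutual_gaze: bool

def triples_from_predictions(heads: List[Box],
                             gaze: List[List[bool]]) -> List[MutualGazeTriple]:
    """Form one triple per unordered head pair from a symmetric gaze matrix.

    gaze[i][j] is True iff heads i and j are looking at each other;
    mutual gaze is symmetric, so only pairs with i < j are enumerated.
    """
    triples = []
    for i in range(len(heads)):
        for j in range(i + 1, len(heads)):
            triples.append(MutualGazeTriple(heads[i], heads[j], gaze[i][j]))
    return triples
```

A one-stage detector in this style predicts such triples directly from the image, rather than first detecting heads and then classifying pairs in a separate stage.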

Related Material


@InProceedings{Guo_2022_ACCV,
    author    = {Guo, Hang and Hu, Zhengxi and Liu, Jingtai},
    title     = {MGTR: End-to-End Mutual Gaze Detection with Transformer},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2022},
    pages     = {1590-1605}
}