Actor-Transformers for Group Activity Recognition

Kirill Gavrilyuk, Ryan Sanford, Mehrsan Javan, Cees G. M. Snoek; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 839-848

Abstract


This paper strives to recognize individual actions and group activities from videos. While existing solutions for this challenging problem explicitly model spatial and temporal relationships based on the locations of individual actors, we propose an actor-transformer model able to learn and selectively extract information relevant for group activity recognition. We feed the transformer with rich actor-specific static and dynamic representations expressed by features from a 2D pose network and a 3D CNN, respectively. We empirically study different ways to combine these representations and show their complementary benefits. Experiments show what is important to transform and how it should be transformed. Moreover, actor-transformers achieve state-of-the-art results on two publicly available benchmarks for group activity recognition, outperforming the previous best published results by a considerable margin.
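To make the pipeline described above concrete, the sketch below shows one plausible way to fuse per-actor static (2D pose) and dynamic (3D CNN) features and refine them with a transformer encoder before predicting individual actions and the group activity. This is not the authors' implementation: the PyTorch framework, module names, feature dimensions, concatenation fusion, and mean pooling over actors are all assumptions made for illustration.

```python
# Illustrative sketch (not the paper's code) of an actor-transformer:
# per-actor static (pose) and dynamic (3D CNN) features are fused and
# passed through a transformer encoder; the output is pooled over actors
# for the group activity and read per actor for individual actions.
# All dimensions and the concatenation fusion are assumptions.
import torch
import torch.nn as nn


class ActorTransformerSketch(nn.Module):
    def __init__(self, pose_dim=256, motion_dim=512, d_model=256,
                 num_layers=1, nhead=8, num_actions=9, num_activities=8):
        super().__init__()
        # Project concatenated static + dynamic actor features to d_model.
        self.fuse = nn.Linear(pose_dim + motion_dim, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, dim_feedforward=4 * d_model,
            batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.action_head = nn.Linear(d_model, num_actions)       # per actor
        self.activity_head = nn.Linear(d_model, num_activities)  # per group

    def forward(self, pose_feats, motion_feats):
        # pose_feats:   (batch, actors, pose_dim)   from a 2D pose network
        # motion_feats: (batch, actors, motion_dim) from a 3D CNN
        tokens = self.fuse(torch.cat([pose_feats, motion_feats], dim=-1))
        refined = self.encoder(tokens)                  # actor-to-actor attention
        actions = self.action_head(refined)             # (batch, actors, num_actions)
        activity = self.activity_head(refined.mean(1))  # pool actors -> group label
        return actions, activity


# Usage with random tensors: 12 actors per clip, as in a volleyball scene.
model = ActorTransformerSketch()
actions, activity = model(torch.randn(2, 12, 256), torch.randn(2, 12, 512))
```

The key idea the sketch tries to capture is that self-attention lets the model weigh which actors matter for the group activity, rather than hard-coding spatial or temporal relations between actor locations.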

Related Material


@InProceedings{Gavrilyuk_2020_CVPR,
  author    = {Gavrilyuk, Kirill and Sanford, Ryan and Javan, Mehrsan and Snoek, Cees G. M.},
  title     = {Actor-Transformers for Group Activity Recognition},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2020}
}