Motion Guided Attention Fusion To Recognize Interactions From Videos

Tae Soo Kim, Jonathan Jones, Gregory D. Hager; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 13076-13086

Abstract


We present a dual-pathway approach for recognizing fine-grained interactions from videos. We build on the success of prior dual-stream approaches, but make the distinction between the static and dynamic representations of objects and their interactions explicit by introducing separate motion and object-detection pathways. Then, using our new Motion-Guided Attention Fusion module, we fuse the bottom-up features in the motion pathway with features captured from object detections to learn the temporal aspects of an action. We show that our approach can generalize effectively across changes in appearance and recognize actions in which an actor interacts with previously unseen objects. We validate our approach on the compositional action recognition task from the Something-Something-v2 dataset, where we outperform existing state-of-the-art methods. We also show that our method generalizes well to real-world tasks, achieving state-of-the-art performance in recognizing humans assembling various IKEA furniture on the IKEA-ASM dataset.
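The abstract describes fusing bottom-up motion-pathway features with per-frame object-detection features via attention. As an illustration only, the sketch below implements a generic dot-product attention fusion in NumPy, where each frame's motion feature acts as a query over that frame's detected-object features; the function and variable names are hypothetical, and the paper's actual module may differ in its exact form.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def motion_guided_attention_fusion(motion_feats, object_feats):
    """Hypothetical sketch of motion-guided attention fusion.

    motion_feats: (T, D) bottom-up motion features, one per frame.
    object_feats: (T, N, D) features for N detected objects per frame.
    Returns (T, D) features in which each frame's object features are
    weighted by their similarity to that frame's motion feature and
    added back residually (a generic attention scheme, not necessarily
    the authors' exact formulation).
    """
    d = motion_feats.shape[-1]
    # Attention logits: scaled dot product between the motion query
    # and each object key within the same frame.
    logits = np.einsum('td,tnd->tn', motion_feats, object_feats) / np.sqrt(d)
    attn = softmax(logits, axis=-1)                         # (T, N)
    attended = np.einsum('tn,tnd->td', attn, object_feats)  # (T, D)
    return motion_feats + attended  # residual fusion

# Toy example with T=8 frames, N=4 detections per frame, D=16 dims.
T, N, D = 8, 4, 16
rng = np.random.default_rng(0)
fused = motion_guided_attention_fusion(rng.normal(size=(T, D)),
                                       rng.normal(size=(T, N, D)))
print(fused.shape)  # (8, 16)
```

The residual connection lets the motion pathway dominate when no detected object is relevant, while the per-frame softmax keeps the object contribution a convex combination of detections.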

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Kim_2021_ICCV,
    author    = {Kim, Tae Soo and Jones, Jonathan and Hager, Gregory D.},
    title     = {Motion Guided Attention Fusion To Recognize Interactions From Videos},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {13076-13086}
}