Efficient Action Recognition via Dynamic Knowledge Propagation

Hanul Kim, Mihir Jain, Jun-Tae Lee, Sungrack Yun, Fatih Porikli; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 13719-13728

Abstract


Efficient action recognition has become crucial to extending the success of action recognition to many real-world applications. Contrary to most existing methods, which mainly focus on selecting salient frames to reduce the computation cost, we focus more on making the most of the selected frames. To this end, we employ two networks of different capabilities that operate in tandem to recognize actions efficiently. Given a video, the lighter network processes more frames while the heavier one processes only a few. To enable effective interaction between the two, we propose dynamic knowledge propagation based on a cross-attention mechanism. This is the main component of our framework, which is essentially a student-teacher architecture; however, because the teacher model continues to interact with the student model during inference, we call it a dynamic student-teacher framework. Through extensive experiments, we demonstrate the effectiveness of each component of our framework. Our method outperforms competing state-of-the-art methods on two video datasets: ActivityNet-v1.3 and Mini-Kinetics.
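The abstract describes the mechanism only at a high level: a light student network sees many frames, a heavy teacher network sees a few, and cross-attention propagates the teacher's knowledge into the student's features at inference time. Below is a minimal, hypothetical PyTorch sketch of such a cross-attention module; the module name, feature dimensions, and the choice of student features as queries against teacher keys/values are illustrative assumptions, not the authors' exact formulation.

# Hypothetical sketch of cross-attention-based knowledge propagation
# between a heavy "teacher" stream and a light "student" stream.
# Names, dimensions, and the query/key/value assignment are assumptions.
import torch
import torch.nn as nn

class KnowledgePropagation(nn.Module):
    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        # Student features attend to teacher features (cross-attention).
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, student_feats: torch.Tensor, teacher_feats: torch.Tensor) -> torch.Tensor:
        # student_feats: (B, T_s, dim) from the light network over many frames
        # teacher_feats: (B, T_t, dim) from the heavy network over a few frames
        propagated, _ = self.attn(
            query=student_feats, key=teacher_feats, value=teacher_feats
        )
        # Residual connection preserves the student's own evidence.
        return self.norm(student_feats + propagated)

# Toy usage: 16 light-network frame features enriched by 4 heavy-network ones.
student = torch.randn(2, 16, 512)
teacher = torch.randn(2, 4, 512)
fused = KnowledgePropagation()(student, teacher)  # shape: (2, 16, 512)

Because the teacher's features are recomputed and attended to per input video, the interaction happens dynamically during inference rather than only through offline distillation, which is what distinguishes this from a conventional static student-teacher setup.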

Related Material


BibTeX
@InProceedings{Kim_2021_ICCV,
    author    = {Kim, Hanul and Jain, Mihir and Lee, Jun-Tae and Yun, Sungrack and Porikli, Fatih},
    title     = {Efficient Action Recognition via Dynamic Knowledge Propagation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {13719-13728}
}