Recognize Actions by Disentangling Components of Dynamics

Yue Zhao, Yuanjun Xiong, Dahua Lin; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 6566-6575

Abstract


Despite the remarkable progress in action recognition over the past several years, existing methods remain limited in efficiency and effectiveness. Methods that treat appearance and motion as separate streams are usually subject to the cost of optical flow computation, while those relying on 3D convolution over raw video frames often yield inferior performance in practice. In this paper, we propose a new ConvNet architecture for video representation learning that derives disentangled components of dynamics purely from raw video frames, without the need for optical flow estimation. In particular, the learned representation comprises three components representing static appearance, apparent motion, and appearance changes. We introduce 3D pooling, cost volume processing, and warped feature differences to extract these three components, respectively. These modules are incorporated as three branches in a unified network, which share the underlying features and are learned jointly in an end-to-end manner. On two large datasets, UCF101 and Kinetics, our method achieves competitive performance with high efficiency, using only RGB frame sequences as input.
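As a rough illustration of the three-branch design described above, the following is a minimal PyTorch sketch. Module names, channel sizes, the displacement range of the cost volume, and the classifier head are illustrative choices, not the authors' exact architecture; in particular, the appearance-change branch here takes plain adjacent-frame feature differences, omitting the warping step for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F


def cost_volume(f1, f2, max_disp=3):
    # Correlation between feature maps f1 and f2 over a small
    # (2*max_disp+1)^2 window of spatial displacements.
    b, c, h, w = f1.shape
    f2_pad = F.pad(f2, [max_disp] * 4)
    vols = []
    for dy in range(2 * max_disp + 1):
        for dx in range(2 * max_disp + 1):
            shifted = f2_pad[:, :, dy:dy + h, dx:dx + w]
            vols.append((f1 * shifted).mean(dim=1, keepdim=True))
    return torch.cat(vols, dim=1)  # (B, (2*max_disp+1)^2, H, W)


class DisentangledDynamics(nn.Module):
    def __init__(self, feat_ch=64, max_disp=3, num_classes=101):
        super().__init__()
        # Shared 2D backbone applied per frame: a stand-in for the
        # shared underlying features described in the abstract.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_ch, 7, stride=2, padding=3),
            nn.ReLU(inplace=True),
        )
        corr_ch = (2 * max_disp + 1) ** 2
        self.max_disp = max_disp
        # Branch 1: static appearance, fed by 3D (temporal) pooling.
        self.static_head = nn.Conv2d(feat_ch, feat_ch, 3, padding=1)
        # Branch 2: apparent motion, fed by cost volumes.
        self.motion_head = nn.Conv2d(corr_ch, feat_ch, 3, padding=1)
        # Branch 3: appearance changes, fed by feature differences
        # (the paper warps features first; this sketch skips that).
        self.change_head = nn.Conv2d(feat_ch, feat_ch, 3, padding=1)
        self.classifier = nn.Linear(3 * feat_ch, num_classes)

    def forward(self, clip):
        # clip: (B, T, 3, H, W) raw RGB frames; no optical flow input.
        b, t = clip.shape[:2]
        feats = self.backbone(clip.flatten(0, 1))   # (B*T, C, h, w)
        feats = feats.view(b, t, *feats.shape[1:])  # (B, T, C, h, w)

        # 1) Static appearance: pool features over the temporal axis.
        static = self.static_head(feats.mean(dim=1))

        # 2) Apparent motion: cost volumes between adjacent frames.
        corr = [cost_volume(feats[:, i], feats[:, i + 1], self.max_disp)
                for i in range(t - 1)]
        motion = self.motion_head(torch.stack(corr, 1).mean(dim=1))

        # 3) Appearance changes: adjacent-frame feature differences.
        diff = (feats[:, 1:] - feats[:, :-1]).mean(dim=1)
        change = self.change_head(diff)

        # Fuse the three disentangled components and classify.
        pooled = torch.cat([static, motion, change], dim=1)
        pooled = F.adaptive_avg_pool2d(pooled, 1).flatten(1)
        return self.classifier(pooled)


if __name__ == "__main__":
    model = DisentangledDynamics()
    logits = model(torch.randn(2, 8, 3, 112, 112))  # 2 clips of 8 frames
    print(logits.shape)  # torch.Size([2, 101])

The num_classes default of 101 simply mirrors UCF101; the point of the sketch is that all three dynamics components are computed from the same per-frame features, so the branches add little cost beyond the shared backbone.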

Related Material


@InProceedings{Zhao_2018_CVPR,
author = {Zhao, Yue and Xiong, Yuanjun and Lin, Dahua},
title = {Recognize Actions by Disentangling Components of Dynamics},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018},
pages = {6566-6575}
}