Action Recognition With Spatial-Temporal Discriminative Filter Banks

Brais Martinez, Davide Modolo, Yuanjun Xiong, Joseph Tighe; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 5482-5491

Abstract


Action recognition has seen dramatic performance improvements in the last few years. Most of the current state-of-the-art literature either aims to improve performance through changes to the backbone CNN, or explores different trade-offs between computational efficiency and performance, again by altering the backbone. However, almost all of these works keep the last layers of the network the same: simply a global average pooling followed by a fully connected layer. In this work we focus on improving the representation capacity of the network, but rather than altering the backbone, we improve the last layers, where changes have little impact on computational cost. In particular, we hypothesize that current architectures have poor sensitivity to finer details, and we exploit recent advances in the fine-grained recognition literature to improve our model in this respect. With the proposed approach, we obtain state-of-the-art performance on Kinetics-400 and Something-Something-V1, the two major large-scale action recognition benchmarks.
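The "global average pooling followed by a fully connected layer" head that the abstract says nearly all backbones share can be sketched as follows. This is a minimal NumPy illustration with hypothetical tensor shapes (a 3D-CNN feature map of 512 channels over a 4x7x7 space-time grid, and 400 classes as in Kinetics-400), not the paper's proposed filter-bank head:

```python
import numpy as np

# Hypothetical backbone output: (batch, channels, time, height, width).
rng = np.random.default_rng(0)
features = rng.standard_normal((2, 512, 4, 7, 7))

num_classes = 400  # e.g. Kinetics-400

# Standard head, step 1: global average pooling over all
# spatial-temporal locations, collapsing (T, H, W).
pooled = features.mean(axis=(2, 3, 4))  # shape (2, 512)

# Standard head, step 2: a single fully connected layer
# mapping pooled features to class logits (weights are random here).
W = 0.01 * rng.standard_normal((512, num_classes))
b = np.zeros(num_classes)
logits = pooled @ W + b  # shape (2, 400)
```

Because the pooling averages away all spatial-temporal structure before classification, this head cannot weight discriminative local regions differently from background, which is the limitation the paper targets.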

Related Material


[bibtex]
@InProceedings{Martinez_2019_ICCV,
author = {Martinez, Brais and Modolo, Davide and Xiong, Yuanjun and Tighe, Joseph},
title = {Action Recognition With Spatial-Temporal Discriminative Filter Banks},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}