AMTnet: Action-Micro-Tube Regression by End-To-End Trainable Deep Architecture

Suman Saha, Gurkirt Singh, Fabio Cuzzolin; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 4414-4423

Abstract


Dominant approaches to action detection can only provide sub-optimal solutions to the problem, as they rely on seeking frame-level detections which are later composed into "action tubes" in a post-processing step. With this paper we radically depart from current practice, and take a first step towards the design and implementation of a deep network architecture able to classify and regress whole video subsets, thus providing a truly optimal solution to the action detection problem. In particular, we propose a novel deep network framework able to regress and classify 3D region proposals spanning two successive video frames, whose core is an evolution of classical region proposal networks (RPNs). As such, our 3D-RPN is able to effectively encode the temporal aspect of actions by exploiting appearance alone, as opposed to methods which rely heavily on expensive optical flow maps. The proposed model is end-to-end trainable and can be jointly optimised for action localisation and classification in a single step. At test time the network predicts "micro-tubes" encompassing two successive frames, which are linked up into complete action tubes via a new algorithm that exploits the temporal encoding learned by the network and cuts computation time by 50%. Promising results on the J-HMDB-21 and UCF-101 action detection datasets show that our model outperforms the state of the art when relying purely on appearance.
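To make the micro-tube idea concrete, below is a minimal sketch of how two-frame micro-tubes might be chained into full action tubes. It assumes each prediction is a pair of per-frame boxes with a class score, and a greedy linking rule that combines confidence with the overlap between the trailing box of the current tube and the leading box of the next candidate. The names MicroTube, iou, and link_micro_tubes, the score-plus-overlap criterion, and the alpha weight are all illustrative assumptions, not the authors' actual code or algorithm.

```python
import numpy as np
from dataclasses import dataclass

# Hypothetical container for one predicted micro-tube: a pair of
# (x1, y1, x2, y2) boxes on two successive frames plus a class score.
@dataclass
class MicroTube:
    box_t: np.ndarray    # box on frame t
    box_t1: np.ndarray   # box on frame t+1
    score: float         # class-specific confidence

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def link_micro_tubes(micro_tubes_per_pair, alpha=1.0):
    """Greedily chain micro-tubes across successive frame pairs.

    micro_tubes_per_pair[k] holds the candidate MicroTubes for the
    k-th frame pair. Because each prediction already spans two frames,
    roughly half as many linking steps are needed as with per-frame
    detections, which is where the computational saving comes from.
    """
    # Start from the highest-scoring micro-tube on the first pair.
    tube = [max(micro_tubes_per_pair[0], key=lambda m: m.score)]
    for candidates in micro_tubes_per_pair[1:]:
        prev = tube[-1]
        # Link score: confidence plus alpha-weighted overlap between
        # the trailing box of the tube and a candidate's leading box.
        best = max(candidates,
                   key=lambda m: m.score + alpha * iou(prev.box_t1, m.box_t))
        tube.append(best)
    return tube
```

In this sketch the temporal association is implicit in each micro-tube's two boxes, so the linker only scores transitions between consecutive predictions rather than between every pair of frame-level detections.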

Related Material


BibTeX:
@InProceedings{Saha_2017_ICCV,
  author    = {Saha, Suman and Singh, Gurkirt and Cuzzolin, Fabio},
  title     = {AMTnet: Action-Micro-Tube Regression by End-To-End Trainable Deep Architecture},
  booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
  month     = {Oct},
  year      = {2017},
  pages     = {4414-4423}
}