Action Anticipation with RBF Kernelized Feature Mapping RNN
Yuge Shi, Basura Fernando, Richard Hartley; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 301-317
Abstract
We introduce a novel Recurrent Neural Network-based algorithm for future video feature generation and action anticipation called feature mapping RNN. Our novel RNN architecture builds upon three effective principles of machine learning: (1) parameter sharing, (2) radial basis function (RBF) kernels, and (3) adversarial training. Using only a fraction of the earliest frames of a video, we are able to generate accurate future features thanks to the generalization capacity of our novel RNN. Using a simple two-layer MLP equipped with an RBF kernel layer, we classify the generated future features for action anticipation. In our experiments, we obtain an 18% improvement on the JHMDB-21 dataset, 6% on UCF101-24, and 13% on UT-Interaction over the prior state-of-the-art for action anticipation.
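To make the classification stage concrete, here is a minimal sketch of a two-layer MLP preceded by a Gaussian RBF kernel layer with learnable centres, applied to generated future features. This is not the authors' implementation: the feature dimension, number of centres, hidden width, and all names are illustrative assumptions (21 output classes matches JHMDB-21).

```python
import torch
import torch.nn as nn

class RBFKernelLayer(nn.Module):
    """Gaussian RBF units with learnable centres and widths:
    phi_j(x) = exp(-||x - c_j||^2 / (2 * sigma_j^2)).
    Layer name and parameterization are assumptions, not the paper's code."""
    def __init__(self, in_dim, num_centres):
        super().__init__()
        self.centres = nn.Parameter(torch.randn(num_centres, in_dim))
        self.log_sigma = nn.Parameter(torch.zeros(num_centres))  # per-centre width

    def forward(self, x):
        # x: (batch, in_dim) -> squared distances to each centre: (batch, num_centres)
        dist2 = torch.cdist(x, self.centres).pow(2)
        return torch.exp(-dist2 / (2 * self.log_sigma.exp().pow(2)))

class AnticipationMLP(nn.Module):
    """Two-layer MLP on top of an RBF kernel layer, classifying future features.
    All sizes below are illustrative assumptions."""
    def __init__(self, feat_dim=512, num_centres=256, hidden=128, num_classes=21):
        super().__init__()
        self.rbf = RBFKernelLayer(feat_dim, num_centres)
        self.net = nn.Sequential(
            nn.Linear(num_centres, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, future_feat):
        # future_feat: features generated by the RNN for unseen future frames
        return self.net(self.rbf(future_feat))

# Usage: classify a batch of 4 generated future feature vectors.
model = AnticipationMLP()
logits = model(torch.randn(4, 512))  # -> (4, 21) class scores
```

In this reading, the RBF layer plays the role of a kernelized feature map, so the MLP separates classes in the induced similarity space rather than in the raw feature space.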
Related Material
[pdf]
[arXiv]
[bibtex]
@InProceedings{Shi_2018_ECCV,
author = {Shi, Yuge and Fernando, Basura and Hartley, Richard},
title = {Action Anticipation with RBF Kernelized Feature Mapping RNN},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}