Predicting the Future: A Jointly Learnt Model for Action Anticipation

Harshala Gammulle, Simon Denman, Sridha Sridharan, Clinton Fookes; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 5562-5571

Abstract


Inspired by human neurological structures for action anticipation, we present an action anticipation model that enables the prediction of plausible future actions by forecasting both the visual and temporal future. In contrast to current state-of-the-art methods, which first learn a model to predict future video features and then perform action anticipation using these features, the proposed framework jointly learns the two tasks: future visual and temporal representation synthesis, and early action anticipation. The joint learning framework ensures that the predicted future embeddings are informative for the action anticipation task. Furthermore, through extensive experimental evaluations we demonstrate the utility of using both visual and temporal semantics of the scene, and illustrate how this representation synthesis can be achieved through a recurrent Generative Adversarial Network (GAN) framework. Our model outperforms the current state-of-the-art methods on multiple datasets: UCF101, UCF101-24, UT-Interaction and TV Human Interaction.
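
To make the joint-learning idea in the abstract concrete, the sketch below shows one way the two objectives could be coupled: a recurrent generator synthesises a future feature embedding from observed frames, a discriminator encourages that embedding to look realistic, and a classifier anticipates the action from the same embedding, with both losses driving the generator. This is a minimal PyTorch sketch under assumed module names, feature sizes and loss weights; it is not the authors' released implementation.

# Minimal sketch of joint future-embedding synthesis and action anticipation.
# All names, dimensions and loss weights are illustrative assumptions.
import torch
import torch.nn as nn

class FutureFeatureGenerator(nn.Module):
    """Recurrent generator: observed per-frame features -> predicted future embedding."""
    def __init__(self, feat_dim=512, hidden_dim=256):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, feat_dim)

    def forward(self, observed):               # observed: (B, T_obs, feat_dim)
        _, h = self.rnn(observed)              # h: (1, B, hidden_dim)
        return self.out(h.squeeze(0))          # (B, feat_dim) predicted future embedding

class Discriminator(nn.Module):
    """Scores whether a future embedding comes from real future frames or the generator."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

    def forward(self, future_feat):
        return self.net(future_feat)           # raw logits for BCEWithLogitsLoss

class ActionClassifier(nn.Module):
    """Anticipates the action class from a (predicted) future embedding."""
    def __init__(self, feat_dim=512, num_classes=101):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, future_feat):
        return self.fc(future_feat)

def generator_step(gen, disc, clf, observed, labels, g_opt, adv_w=1.0, cls_w=1.0):
    """One joint update: adversarial realism + anticipation accuracy in a single loss,
    so the synthesised embedding stays informative for the anticipation task."""
    bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()
    fake_future = gen(observed)
    adv_loss = bce(disc(fake_future), torch.ones(observed.size(0), 1))  # fool the discriminator
    cls_loss = ce(clf(fake_future), labels)                             # anticipate the action
    loss = adv_w * adv_loss + cls_w * cls_loss
    g_opt.zero_grad(); loss.backward(); g_opt.step()
    return loss.item()

In practice the optimiser passed as g_opt would cover both the generator and classifier parameters, and a separate discriminator step (real future features vs. generated ones) would alternate with this update, as in a standard GAN training loop.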

Related Material


BibTeX
@InProceedings{Gammulle_2019_ICCV,
author = {Gammulle, Harshala and Denman, Simon and Sridharan, Sridha and Fookes, Clinton},
title = {Predicting the Future: A Jointly Learnt Model for Action Anticipation},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}