Part-Activated Deep Reinforcement Learning for Action Prediction
Lei Chen, Jiwen Lu, Zhanjie Song, Jie Zhou; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 421-436
Abstract
In this paper, we propose a part-activated deep reinforcement learning (PA-DRL) method for action prediction. Most existing methods for action prediction model actions from the evolution of whole frames, which makes them susceptible to noise unrelated to the ongoing action, especially at early prediction stages. Moreover, discarding the structural information of the human body weakens the ability of the features to describe actions. To address this, we design PA-DRL to exploit the structure of the human body by extracting skeleton proposals under a deep reinforcement learning framework. Specifically, we extract features from different parts of the human body individually and activate the action-related parts to enhance the representation. Our method not only exploits the structural information of the human body, but also considers the parts that are most salient for expressing actions. We evaluate our method on three popular action prediction datasets: UT-Interaction, BIT-Interaction and UCF101. Our experimental results demonstrate that our method achieves performance competitive with state-of-the-art methods.
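
The core idea described above, per-part features extracted around skeleton proposals and gated by a learned activation policy before classification, can be illustrated with a short sketch. Everything below (module names, feature dimensions, the Bernoulli gating policy, and the average pooling) is an assumption made for illustration only, not the authors' released implementation.

# A minimal sketch of part activation, assuming per-part skeleton features
# are already extracted; names and dimensions are hypothetical.
import torch
import torch.nn as nn

class PartActivation(nn.Module):
    def __init__(self, num_parts=10, feat_dim=256, num_actions=6):
        super().__init__()
        # Policy head: scores each body-part feature for activation (keep/drop).
        self.policy = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))
        # Classifier over the aggregated, part-activated representation.
        self.classifier = nn.Linear(feat_dim, num_actions)

    def forward(self, part_feats):
        # part_feats: (batch, num_parts, feat_dim) features from skeleton proposals.
        logits = self.policy(part_feats).squeeze(-1)   # (batch, num_parts)
        probs = torch.sigmoid(logits)
        # Sample a binary activation mask per part (the agent's "action").
        mask = torch.bernoulli(probs)
        activated = part_feats * mask.unsqueeze(-1)    # zero out inactive parts
        pooled = activated.sum(dim=1) / mask.sum(dim=1, keepdim=True).clamp(min=1.0)
        return self.classifier(pooled), probs, mask

# Usage: a REINFORCE-style update would reward activation masks whose pooled
# features classify the partially observed video correctly; this only shows
# the forward pass on dummy features.
model = PartActivation()
feats = torch.randn(4, 10, 256)    # 4 partial videos, 10 parts, 256-d features
scores, probs, mask = model(feats)
print(scores.shape, mask.shape)    # torch.Size([4, 6]) torch.Size([4, 10])
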
Related Material
[pdf]
[bibtex]
@InProceedings{Chen_2018_ECCV,
author = {Chen, Lei and Lu, Jiwen and Song, Zhanjie and Zhou, Jie},
title = {Part-Activated Deep Reinforcement Learning for Action Prediction},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}