Progressively Parsing Interactional Objects for Fine Grained Action Detection
Bingbing Ni, Xiaokang Yang, Shenghua Gao; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 1020-1028
Abstract
Fine-grained video action analysis often requires reliable detection and tracking of various interacting objects and human body parts, denoted as interactional object parsing. However, most previous methods, based on either independent or joint object detection, may suffer from high model complexity and challenging image content, e.g., illumination/pose/appearance/scale variation, motion, occlusion, etc. In this work, we propose an end-to-end system based on a recurrent neural network to perform frame-by-frame interactional object parsing, which alleviates these difficulties in an incremental manner. Our key innovation is that, instead of jointly outputting all object detections at once, for each frame we use a set of long short-term memory (LSTM) nodes to incrementally refine the detections. After passing each LSTM node, more object detections are consolidated, and thus more contextual information can be used to resolve the more difficult detections. Extensive experiments on two benchmark fine-grained activity datasets demonstrate that our proposed algorithm achieves better interacting object detection performance, which in turn boosts action recognition performance over the state-of-the-art.
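To make the progressive refinement idea concrete, the sketch below illustrates, in PyTorch, how a chain of LSTM steps could emit per-frame object detections while conditioning each step on the detections consolidated so far. All module names, feature dimensions, the number of refinement steps, and the (x, y, w, h, confidence) box parameterization are illustrative assumptions for this sketch, not the authors' released implementation.

# Minimal sketch of progressive LSTM-based detection refinement (assumed
# architecture details; not the paper's actual code).
import torch
import torch.nn as nn


class ProgressiveObjectParser(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=256, num_objects=5, num_steps=3):
        super().__init__()
        self.num_objects = num_objects
        self.num_steps = num_steps
        # Shared LSTM cell applied repeatedly; its input is the frame feature
        # concatenated with the previous step's detection estimates.
        self.cell = nn.LSTMCell(feat_dim + num_objects * 5, hidden_dim)
        # Each step regresses (x, y, w, h, confidence) for every object.
        self.head = nn.Linear(hidden_dim, num_objects * 5)

    def forward(self, frame_feat):
        """frame_feat: (batch, feat_dim) CNN feature of one video frame."""
        batch = frame_feat.size(0)
        h = frame_feat.new_zeros(batch, self.cell.hidden_size)
        c = frame_feat.new_zeros(batch, self.cell.hidden_size)
        detections = frame_feat.new_zeros(batch, self.num_objects * 5)
        outputs = []
        for _ in range(self.num_steps):
            # Later steps see earlier detections as context, so objects that
            # are easy to consolidate first can guide the harder ones.
            h, c = self.cell(torch.cat([frame_feat, detections], dim=1), (h, c))
            detections = self.head(h)
            outputs.append(detections.view(batch, self.num_objects, 5))
        return outputs  # list of progressively refined detection sets


if __name__ == "__main__":
    model = ProgressiveObjectParser()
    feats = torch.randn(2, 512)        # e.g. pooled CNN features for 2 frames
    refined = model(feats)
    print([o.shape for o in refined])  # 3 steps of (batch, objects, box+conf)

In this toy version a single LSTM cell is reused across refinement steps; the paper's "set of LSTM nodes" could equally be realized as distinct cells per step. The point of the sketch is only the control flow: each pass re-reads the frame feature together with the current detection estimates and outputs an updated set.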
Related Material
@InProceedings{Ni_2016_CVPR,
author = {Ni, Bingbing and Yang, Xiaokang and Gao, Shenghua},
title = {Progressively Parsing Interactional Objects for Fine Grained Action Detection},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2016}
}