Can Humans Fly? Action Understanding With Multiple Classes of Actors

Chenliang Xu, Shao-Hang Hsieh, Caiming Xiong, Jason J. Corso; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 2264-2273

Abstract


Can humans fly? Emphatically, no. Can cars eat? Again, absolutely not. Yet such absurd inferences result from the current disregard for particular types of actors in action understanding. To our knowledge, there is no prior work on simultaneously inferring actors and actions in video, nor a dataset for experimenting with the problem. Our paper hence marks the first effort in the computer vision community to jointly consider various types of actors undergoing various actions. To initiate study of this problem, we collect a dataset of 3782 videos from YouTube and label both actors and actions at the pixel level in each video. We formulate the general actor-action understanding problem and instantiate it at several granularities: video-level single-label and multiple-label actor-action recognition, and pixel-level actor-action semantic segmentation. Our experiments demonstrate that joint inference over actors and actions outperforms independent inference over each, supporting our argument for the explicit consideration of various actors in comprehensive action understanding.
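The abstract's opening examples (humans cannot fly, cars cannot eat) hint at why a joint actor-action label space is smaller than the full cross product of independent actor and action label sets. A minimal sketch of this idea, using an illustrative validity table (the actor and action names and the table itself are hypothetical here, not the paper's actual A2D label set):

```python
# Toy illustration of a joint actor-action label space that rules out
# semantically invalid pairs such as ("car", "eating").
from itertools import product

ACTORS = ["adult", "bird", "car", "dog"]
ACTIONS = ["climbing", "eating", "flying", "rolling"]

# Hypothetical validity table: which actions each actor can perform.
VALID = {
    "adult": {"climbing", "eating"},
    "bird": {"climbing", "eating", "flying"},
    "car": {"rolling"},
    "dog": {"climbing", "eating", "rolling"},
}

def joint_label_space():
    """Return all (actor, action) pairs that are semantically valid."""
    return [(a, v) for a, v in product(ACTORS, ACTIONS) if v in VALID[a]]

pairs = joint_label_space()
# The joint space is strictly smaller than the 4 x 4 = 16 independent pairs,
# which is one intuition for why joint inference can outperform treating
# actors and actions as unrelated labels.
print(len(pairs))
```

Here, independent per-task classifiers could happily output ("car", "eating"), whereas a model over the joint space never considers it.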

Related Material


[pdf]
[bibtex]
@InProceedings{Xu_2015_CVPR,
author = {Xu, Chenliang and Hsieh, Shao-Hang and Xiong, Caiming and Corso, Jason J.},
title = {Can Humans Fly? Action Understanding With Multiple Classes of Actors},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2015}
}