Activity Auto-Completion: Predicting Human Activities From Partial Videos

Zhen Xu, Laiyun Qing, Jun Miao; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 3191-3199

Abstract


In this paper, we propose an activity auto-completion (AAC) model for human activity prediction by formulating it as a query auto-completion (QAC) problem in information retrieval. First, we extract discriminative patches from video frames. A video is represented by these patches and divided into a collection of segments, each of which is regarded as a character typed into a search box. A partially observed video is then treated as an activity prefix consisting of one or more characters. Finally, the unobserved remainder of the activity is predicted from the activity candidates provided by the auto-completion model. The candidates are matched against the activity prefix on-the-fly and ranked by a learning-to-rank algorithm. We validate our method on UT-Interaction Set #1 and Set #2 [19]. The experimental results show that the proposed activity auto-completion model achieves promising performance.
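To make the prefix-matching idea concrete, the sketch below mimics the abstract's pipeline on toy data: per-frame features are pooled into segment "characters", a partially observed video becomes a prefix, and complete activities in a gallery are ranked against that prefix. This is only an illustrative sketch under stated assumptions: the paper's discriminative-patch features and learning-to-rank step are replaced by mean-pooled descriptors and cosine-similarity ranking, and all names (segment_descriptors, prefix_score, auto_complete) are hypothetical.

```python
# Minimal sketch of activity auto-completion as prefix matching.
# Assumption: each video segment is summarized by a fixed-length descriptor;
# the paper's learning-to-rank is replaced by cosine-similarity ranking.
import numpy as np


def segment_descriptors(video_features, num_segments):
    """Split per-frame features into segments ("characters") and mean-pool each."""
    segments = np.array_split(video_features, num_segments)
    return np.stack([seg.mean(axis=0) for seg in segments])


def prefix_score(prefix_segments, candidate_segments):
    """Average cosine similarity between the observed prefix and the
    corresponding leading segments of a complete activity candidate."""
    k = min(len(prefix_segments), len(candidate_segments))
    pref, cand = prefix_segments[:k], candidate_segments[:k]
    num = (pref * cand).sum(axis=1)
    den = np.linalg.norm(pref, axis=1) * np.linalg.norm(cand, axis=1) + 1e-8
    return float((num / den).mean())


def auto_complete(prefix_segments, gallery):
    """Rank gallery activities (label, segments) against the observed prefix."""
    ranked = sorted(gallery,
                    key=lambda item: prefix_score(prefix_segments, item[1]),
                    reverse=True)
    return [label for label, _ in ranked]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy gallery: 6 complete activities, each with 10 segments of 64-D descriptors.
    gallery = [(f"activity_{i}", rng.standard_normal((10, 64))) for i in range(6)]
    # A partially observed video: the first 4 segments of activity_2, plus noise.
    observed = gallery[2][1][:4] + 0.1 * rng.standard_normal((4, 64))
    print(auto_complete(observed, gallery)[:3])  # top-3 activity candidates
```

In this toy setup the prefix of activity_2 ranks its own complete sequence first; in the paper, a learned ranking function plays the role of the similarity score used here.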

Related Material


[bibtex]
@InProceedings{Xu_2015_ICCV,
author = {Xu, Zhen and Qing, Laiyun and Miao, Jun},
title = {Activity Auto-Completion: Predicting Human Activities From Partial Videos},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015}
}