Spatio-temporal Context Modeling for BoW-Based Video Classification

Saehoon Yi, Vladimir Pavlovic; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2013, pp. 779-786

Abstract


We propose an autocorrelation Cox process that extends the traditional bag-of-words representation to model the spatio-temporal context within a video sequence. Bag-of-words models are effective tools for representing a video by a histogram of visual words that describe local appearance and motion. A major limitation of this model is its inability to encode the spatio-temporal structure of visual words pertaining to the context of the video. Several works have proposed to remedy this by learning the pairwise correlations between words. However, pairwise analysis leads to a quadratic increase in the number of features, making the models prone to overfitting and challenging to learn from data. The proposed autocorrelation Cox process model encodes, in a compact way, the contextual information within a video sequence, leading to improved classification performance. Spatio-temporal autocorrelations of visual words estimated from the Cox process are coupled with information gain feature selection to discern the essential structure for the classification task. Experiments on crowd activity and human action datasets illustrate that the proposed model achieves state-of-the-art performance while providing intuitive spatio-temporal descriptors of the video context.
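As a rough sketch of the baseline pipeline the abstract builds on (the standard bag-of-words histogram plus information gain feature selection, not the authors' Cox-process model itself), the two components could look like the following; the function names and toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def bow_histogram(word_ids, vocab_size):
    """Bag-of-words representation: a normalized histogram of
    visual-word occurrences for one video."""
    hist = np.bincount(word_ids, minlength=vocab_size).astype(float)
    return hist / max(hist.sum(), 1.0)

def information_gain(presence, labels):
    """Information gain of a binary word-presence feature w.r.t.
    the class labels: IG = H(Y) - H(Y | word present/absent)."""
    def entropy(y):
        if len(y) == 0:
            return 0.0
        _, counts = np.unique(y, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))
    labels = np.asarray(labels)
    mask = np.asarray(presence, dtype=bool)
    p_on = mask.mean()
    return (entropy(labels)
            - p_on * entropy(labels[mask])
            - (1 - p_on) * entropy(labels[~mask]))

def select_words(histograms, labels, k):
    """Rank vocabulary words by information gain against the video
    labels and keep the top k as the selected feature set."""
    presence = np.asarray(histograms) > 0
    gains = np.array([information_gain(presence[:, j], labels)
                      for j in range(presence.shape[1])])
    return np.argsort(gains)[::-1][:k]
```

In the paper's setting, the selection step would score spatio-temporal autocorrelation features estimated from the Cox process rather than raw word presence, but the information gain criterion itself is the same.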

Related Material


@InProceedings{Yi_2013_ICCV_Workshops,
author = {Saehoon Yi and Vladimir Pavlovic},
title = {Spatio-temporal Context Modeling for BoW-Based Video Classification},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {June},
year = {2013}
}