Spatiotemporal Modeling for Crowd Counting in Videos
Feng Xiong, Xingjian Shi, Dit-Yan Yeung; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 5151-5159
Abstract
Region of Interest (ROI) crowd counting can be formulated as a regression problem of learning a mapping from an image or a video frame to a crowd density map. Recently, convolutional neural network (CNN) models have achieved promising results for crowd counting. However, even when dealing with video data, CNN-based methods still consider each video frame independently, ignoring the strong temporal correlation between neighboring frames. To exploit the otherwise very useful temporal information in video sequences, we propose a variant of a recent deep learning model called convolutional LSTM (ConvLSTM) for crowd counting. Unlike previous CNN-based methods, our method fully captures both spatial and temporal dependencies. Furthermore, we extend the ConvLSTM model to a bidirectional ConvLSTM model which can access long-range information in both directions. Extensive experiments on four publicly available datasets demonstrate the reliability of our approach and the effectiveness of incorporating temporal information to boost the accuracy of crowd counting. In addition, we conduct transfer learning experiments showing that once our model is trained on one dataset, its learning experience can be easily transferred to a new dataset containing only a few video frames for model adaptation.
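To make the bidirectional ConvLSTM idea in the abstract concrete, the sketch below shows a minimal PyTorch version under stated assumptions: a simplified ConvLSTM cell without peephole connections, hypothetical channel sizes, and a 1x1 convolution as a placeholder density-map head. It illustrates running one ConvLSTM pass forward and one backward over a frame sequence; it is not the authors' released implementation.

import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: gates are computed with convolutions,
    so the hidden and cell states keep their spatial structure."""
    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        # One convolution over [input, hidden] produces all four gates at once.
        self.gates = nn.Conv2d(in_channels + hidden_channels,
                               4 * hidden_channels, kernel_size,
                               padding=kernel_size // 2)
        self.hidden_channels = hidden_channels

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)   # update cell state
        h = o * torch.tanh(c)           # new hidden state
        return h, c

class BiConvLSTMCounter(nn.Module):
    """Bidirectional ConvLSTM over a frame sequence, followed by a 1x1
    convolution that regresses a per-frame crowd density map."""
    def __init__(self, in_channels=3, hidden_channels=32):
        super().__init__()
        self.fwd = ConvLSTMCell(in_channels, hidden_channels)
        self.bwd = ConvLSTMCell(in_channels, hidden_channels)
        self.head = nn.Conv2d(2 * hidden_channels, 1, kernel_size=1)

    def forward(self, frames):                      # frames: (B, T, C, H, W)
        B, T, _, H, W = frames.shape
        zeros = lambda: torch.zeros(B, self.fwd.hidden_channels, H, W,
                                    device=frames.device)
        h_f, c_f = zeros(), zeros()
        h_b, c_b = zeros(), zeros()
        fwd_out, bwd_out = [], [None] * T
        for t in range(T):                          # forward pass over time
            h_f, c_f = self.fwd(frames[:, t], (h_f, c_f))
            fwd_out.append(h_f)
        for t in reversed(range(T)):                # backward pass over time
            h_b, c_b = self.bwd(frames[:, t], (h_b, c_b))
            bwd_out[t] = h_b
        # Combine both directions and regress a density map per frame.
        maps = [self.head(torch.cat([f, b], dim=1))
                for f, b in zip(fwd_out, bwd_out)]
        return torch.stack(maps, dim=1)             # (B, T, 1, H, W)

Summing a predicted density map over its spatial dimensions gives the estimated count for that frame; for example, BiConvLSTMCounter()(torch.rand(2, 5, 3, 64, 64)) returns density maps of shape (2, 5, 1, 64, 64).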
Related Material
[pdf]
[supp]
[arXiv]
[bibtex]
@InProceedings{Xiong_2017_ICCV,
author = {Xiong, Feng and Shi, Xingjian and Yeung, Dit-Yan},
title = {Spatiotemporal Modeling for Crowd Counting in Videos},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}