Temporally-Dependent Dirichlet Process Mixtures for Egocentric Video Segmentation

Joseph W. Barker, James W. Davis; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2014, pp. 557-564

Abstract


In this paper, we present a novel approach for segmenting video into large regions of generally similar activity. Based on the Dirichlet Process Multinomial Mixture model, we introduce temporal dependency into the inference algorithm, allowing our method to automatically create long segments with high saliency while ignoring small, inconsequential interruptions. We evaluate our algorithm and other topic models with both synthetic datasets and real-world video. Additionally, applicability to image segmentation is shown. Results show that our method outperforms related methods with respect to accuracy and noise removal.
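The core idea described in the abstract, extending a Dirichlet Process Multinomial Mixture so that inference favors long, temporally contiguous segments, can be illustrated with a small collapsed Gibbs sampling sketch. This is not the authors' published algorithm: the temporal bonus `kappa`, the symmetric Dirichlet smoothing `beta`, and the simplified per-count predictive likelihood below are all illustrative assumptions, chosen only to show how a temporal term can be folded into the cluster-assignment step.

```python
import numpy as np

def dpmm_temporal_gibbs(X, alpha=1.0, beta=0.5, kappa=2.0, iters=50, seed=0):
    """Collapsed Gibbs sampler for a DP multinomial mixture over a frame
    sequence X (T x V count matrix). A temporal bonus rewards assigning a
    frame to the same cluster as its neighbors; this stands in, loosely,
    for the temporal dependency the paper introduces."""
    rng = np.random.default_rng(seed)
    T, V = X.shape
    z = np.zeros(T, dtype=int)                   # all frames start in one segment
    counts = {0: X.sum(axis=0).astype(float)}    # per-cluster feature counts
    sizes = {0: T}                               # per-cluster frame counts
    next_id = 1                                  # label for a brand-new cluster
    for _ in range(iters):
        for t in range(T):
            # Remove frame t from its current cluster.
            k_old = z[t]
            counts[k_old] -= X[t]
            sizes[k_old] -= 1
            if sizes[k_old] == 0:
                del counts[k_old], sizes[k_old]
            # Candidate clusters: all existing ones, plus one new cluster.
            ks = list(counts) + [next_id]
            logp = []
            for k in ks:
                if k in counts:
                    prior = np.log(sizes[k])     # CRP: proportional to size
                    c = counts[k]
                else:
                    prior = np.log(alpha)        # CRP: new-cluster mass
                    c = np.zeros(V)
                # Simplified (per-count) Dirichlet-multinomial predictive;
                # an approximation, not the exact gamma-function form.
                ll = np.sum(X[t] * np.log((c + beta) / (c.sum() + beta * V)))
                # Temporal bonus for agreeing with neighboring frames: this
                # is the illustrative modification to standard DPMM Gibbs.
                temp = kappa * (int(t > 0 and z[t - 1] == k)
                                + int(t < T - 1 and z[t + 1] == k))
                logp.append(prior + ll + temp)
            logp = np.array(logp)
            p = np.exp(logp - logp.max())        # stable softmax
            k_new = ks[rng.choice(len(ks), p=p / p.sum())]
            if k_new not in counts:              # open the new cluster
                counts[k_new] = np.zeros(V)
                sizes[k_new] = 0
                next_id += 1
            counts[k_new] += X[t]
            sizes[k_new] += 1
            z[t] = k_new
    return z
```

With a larger `kappa`, isolated one-frame interruptions are increasingly absorbed into the surrounding segment, which mirrors the behavior the abstract describes: long segments that ignore small, inconsequential interruptions.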

Related Material


@InProceedings{Barker_2014_CVPR_Workshops,
author = {Barker, Joseph W. and Davis, James W.},
title = {Temporally-Dependent Dirichlet Process Mixtures for Egocentric Video Segmentation},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2014}
}