Multi-Cue Structure Preserving MRF for Unconstrained Video Segmentation

Saehoon Yi, Vladimir Pavlovic; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 3262-3270

Abstract


Video segmentation is a stepping stone to understanding video context. It enables a video to be represented as a set of coherent regions that comprise whole objects or their parts. The challenge stems from the fact that most video segmentation algorithms must rely on unsupervised learning, owing to the expensive cost of pixelwise video annotation and the large intra-class variability among unconstrained videos of similar classes. We propose a Markov Random Field (MRF) model for unconstrained video segmentation that relies on a tight integration of multiple cues: vertices are defined from contour-based superpixels, unary potentials from temporally smooth label likelihoods, and pairwise potentials from the global structure of a video. This multi-cue structure is key to extracting coherent object regions from unconstrained videos in the absence of supervision. Our experiments on the VSB100 dataset show that the proposed model significantly outperforms competing state-of-the-art algorithms. Qualitative analysis illustrates that the segmentation results of the proposed model are consistent with human perception of objects.
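The abstract's description maps onto the standard form of a pairwise MRF energy over a superpixel graph. The exact potentials used in the paper are not given here, so the following is only a generic sketch; the symbols (label field x, superpixel vertex set V, edge set E, trade-off weight lambda) are assumptions introduced for illustration, not the paper's notation:

E(\mathbf{x}) \;=\; \sum_{i \in \mathcal{V}} \psi_i(x_i) \;+\; \lambda \sum_{(i,j) \in \mathcal{E}} \psi_{ij}(x_i, x_j)

In this reading, \psi_i would encode the temporally smooth label likelihood at contour-based superpixel i, \psi_{ij} would encode the global-structure pairwise cue between neighboring superpixels, and the segmentation corresponds to a labeling \mathbf{x} that minimizes E.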

Related Material


[pdf]
[bibtex]
@InProceedings{Yi_2015_ICCV,
author = {Yi, Saehoon and Pavlovic, Vladimir},
title = {Multi-Cue Structure Preserving MRF for Unconstrained Video Segmentation},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015}
}