Video Object Segmentation by Salient Segment Chain Composition

Dan Banica, Alexandru Agape, Adrian Ion, Cristian Sminchisescu; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2013, pp. 283-290

Abstract


We present a model for video segmentation, applicable to RGB (and, if available, RGB-D) data, that constructs multiple plausible partitions corresponding to the static and the moving objects in the scene: i) we generate multiple figure-ground segmentations in each frame, parametrically, based on boundary and optical flow cues, then track, link, and refine the salient segment chains corresponding to the different objects over time, using long-range temporal constraints; ii) a video partition is obtained by composing segment chains into consistent tilings, where the individual object chains jointly explain the video and do not overlap. Saliency measures based on figural and motion cues, as well as measures learned from human eye movements, are exploited, with substantial gain, at the level of segment generation and chain construction, in order to produce compact sets of hypotheses that correctly reflect the qualities of the different configurations. The model makes it possible to compute multiple hypotheses both for individual object segmentations tracked over time and for complete video partitions. We report quantitative, state-of-the-art results on the SegTrack single-object benchmark, and promising qualitative and quantitative results on clips containing multiple static and moving objects, collected from Hollywood movies and from the MIT dataset.
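The composition step in ii) amounts to selecting a subset of object chains that jointly tile the video without conflicting in any frame. As an illustrative sketch only (not the authors' implementation, which scores full tilings rather than selecting greedily), the constraint can be expressed as a greedy selection over saliency-scored chains, where a chain is admissible if its per-frame masks do not overlap with any already-selected chain; the `SegmentChain` structure and scores below are hypothetical placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class SegmentChain:
    """A tracked object hypothesis: one pixel set per frame it appears in.
    (Hypothetical structure for illustration, not the paper's data format.)"""
    score: float  # saliency / quality score of the whole chain
    masks: dict = field(default_factory=dict)  # frame index -> frozenset of pixel ids

def overlaps(a: SegmentChain, b: SegmentChain) -> bool:
    """Two chains conflict if their masks share pixels in any common frame."""
    common = a.masks.keys() & b.masks.keys()
    return any(a.masks[f] & b.masks[f] for f in common)

def compose_tiling(chains):
    """Greedily keep the best-scoring chains that are mutually non-overlapping,
    yielding one consistent (partial) tiling of the video."""
    selected = []
    for c in sorted(chains, key=lambda c: c.score, reverse=True):
        if all(not overlaps(c, s) for s in selected):
            selected.append(c)
    return selected
```

Running the selection repeatedly, each time excluding some chains of an already-found tiling, would yield the multiple alternative video partitions the abstract refers to.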

Related Material


[pdf]
[bibtex]
@InProceedings{Banica_2013_ICCV_Workshops,
author = {Dan Banica and Alexandru Agape and Adrian Ion and Cristian Sminchisescu},
title = {Video Object Segmentation by Salient Segment Chain Composition},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {June},
year = {2013}
}