Revisiting Depth Layers from Occlusions

Adarsh Kowdle, Andrew Gallagher, Tsuhan Chen; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 2091-2098

Abstract


In this work, we consider images of a scene with a moving object captured by a static camera. As the object (human or otherwise) moves about the scene, it reveals pairwise depth-ordering or occlusion cues. The goal of this work is to use these sparse occlusion cues along with monocular depth occlusion cues to densely segment the scene into depth layers. We cast the problem of depth-layer segmentation as a discrete labeling problem on a spatiotemporal Markov Random Field (MRF) that uses the motion occlusion cues along with monocular cues and a smooth motion prior for the moving object. We quantitatively show that the depth ordering produced by the proposed combination of depth cues from object motion and monocular occlusion cues is superior to using either feature independently, or to using a naïve combination of the features.
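
The abstract casts depth-layer segmentation as a discrete labeling problem on an MRF that fuses sparse motion-occlusion orderings with dense monocular cues. The sketch below is a simplified, hypothetical illustration of that idea only: per-region unary costs stand in for the monocular cues, pairwise ordering constraints stand in for the motion occlusion cues, and a plain ICM loop replaces whatever optimizer the authors use. All names, weights, and the energy terms themselves are assumptions for illustration, not the paper's formulation.

# Minimal sketch (assumed, not the authors' implementation) of combining
# sparse pairwise depth-ordering cues with per-region monocular cues in an
# MRF-style labeling energy over image regions.
import numpy as np

def energy(labels, unary, edges, ordering, w_smooth=1.0, w_order=5.0):
    """Total energy of a depth-layer labeling.

    labels   : (N,) int array, depth layer per region (0 = nearest).
    unary    : (N, K) array, monocular-cue cost of each layer per region.
    edges    : list of (i, j) adjacent region pairs (spatial smoothness).
    ordering : list of (i, j) pairs where motion occlusion indicates
               region i is in front of (closer than) region j.
    """
    e = unary[np.arange(len(labels)), labels].sum()
    # Potts-style smoothness on adjacent regions.
    e += w_smooth * sum(labels[i] != labels[j] for i, j in edges)
    # Penalize labelings that violate the observed occlusion ordering.
    e += w_order * sum(labels[i] >= labels[j] for i, j in ordering)
    return e

def icm(unary, edges, ordering, n_iter=10):
    """Iterated conditional modes: a simple stand-in for the move-making
    (e.g. graph-cut) optimizer one would normally use for such an MRF."""
    n, k = unary.shape
    labels = unary.argmin(axis=1)
    for _ in range(n_iter):
        for i in range(n):
            costs = []
            for layer in range(k):
                trial = labels.copy()
                trial[i] = layer
                costs.append(energy(trial, unary, edges, ordering))
            labels[i] = int(np.argmin(costs))
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    unary = rng.random((6, 3))               # 6 regions, 3 depth layers
    edges = [(0, 1), (1, 2), (3, 4), (4, 5)]
    ordering = [(0, 3), (1, 4)]              # regions 0, 1 observed in front of 3, 4
    print(icm(unary, edges, ordering))

The toy example assigns each region the layer minimizing its monocular cost, then lets the ordering and smoothness terms pull neighboring regions toward a consistent front-to-back arrangement.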

Related Material


[pdf]
[bibtex]
@InProceedings{Kowdle_2013_CVPR,
author = {Kowdle, Adarsh and Gallagher, Andrew and Chen, Tsuhan},
title = {Revisiting Depth Layers from Occlusions},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2013}
}