Sparse Depth Super Resolution

Jiajun Lu, David Forsyth; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 2245-2253

Abstract


We describe a method for producing detailed high resolution depth maps from aggressively subsampled depth measurements. Our method fully exploits the relationship between image segmentation boundaries and depth boundaries; it takes as input an image combined with a low resolution depth map. 1) The image is segmented under the guidance of the sparse depth samples. 2) The depth field of each segment is reconstructed independently using a novel smoothing method. 3) For videos, time-stamped samples from nearby frames are incorporated. We show reconstruction results at super resolution factors from x4 to x100, whereas previous methods mainly work at x2 to x16. The method is tested on four different datasets and six video sequences, covering quite different regimes, and it outperforms recent state-of-the-art methods both quantitatively and qualitatively. We also demonstrate that depth maps produced by our method can be used by applications such as hand trackers, whereas depth maps from other methods prove problematic.
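To make the per-segment idea in step 2 concrete, here is a minimal NumPy sketch that fills each image segment with a constant depth estimated from the sparse samples falling inside it. This is only an illustration of the segment-independent reconstruction principle, not the paper's actual smoothing method; the function name and the constant-fill strategy are assumptions for the example.

```python
import numpy as np

def fill_depth_per_segment(labels, sparse_depth, valid):
    """Illustrative per-segment depth fill (not the paper's smoother).

    labels:       (H, W) int array of segment ids from image segmentation.
    sparse_depth: (H, W) float array, meaningful only where `valid` is True.
    valid:        (H, W) bool mask marking pixels with a depth measurement.

    Each segment is filled with the mean of its own samples, so depth
    discontinuities stay aligned with segmentation boundaries. Segments
    with no samples are left as NaN.
    """
    dense = np.full(labels.shape, np.nan, dtype=float)
    for seg in np.unique(labels):
        mask = labels == seg
        samples = sparse_depth[mask & valid]
        if samples.size:
            dense[mask] = samples.mean()
    return dense

# Tiny demo: two segments, one sparse sample each.
labels = np.array([[0, 0, 1],
                   [0, 1, 1]])
sparse = np.zeros((2, 3))
valid = np.zeros((2, 3), dtype=bool)
sparse[0, 0], valid[0, 0] = 2.0, True   # sample in segment 0
sparse[1, 2], valid[1, 2] = 5.0, True   # sample in segment 1
dense = fill_depth_per_segment(labels, sparse, valid)
```

The real method replaces the constant fill with a smoothing reconstruction inside each segment, but the key property is the same: no depth values are blended across segment boundaries.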

Related Material


[pdf]
[bibtex]
@InProceedings{Lu_2015_CVPR,
author = {Lu, Jiajun and Forsyth, David},
title = {Sparse Depth Super Resolution},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2015}
}