Light Field Scale-Depth Space Transform for Dense Depth Estimation

Ivana Tosic, Kathrin Berkner; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2014, pp. 435-442

Abstract


Recent development of hand-held plenoptic cameras has brought light field acquisition into many practical and low-cost imaging applications. We address a crucial challenge in light field data processing: dense depth estimation of 3D scenes captured by camera arrays or plenoptic cameras. We first propose a method for constructing light field scale-depth spaces by convolving a given light field with a special kernel adapted to the light field structure. We detect local extrema in such scale-depth spaces, which indicate regions of constant depth, and convert them to dense depth maps after resolving occlusion conflicts in a way that is consistent across all views. Due to the multi-scale characterization of objects in the proposed representations, our method provides depth estimates for both uniform and textured regions, where uniform regions with large spatial extent are captured at coarser scales and textured regions are found at finer scales. Experimental results on the HCI (Heidelberg Collaboratory for Image Processing) light field benchmark show that our method gives state-of-the-art depth accuracy. We also show results on plenoptic images from the RAYTRIX camera and our plenoptic camera prototype.
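The core idea in the abstract — building a scale-depth space by filtering the light field with a kernel parametrized jointly by scale and depth (EPI slope), then locating extrema — can be illustrated with a minimal sketch. This is not the authors' implementation; the kernel, the shear-then-filter approximation, and all function names here are simplifying assumptions, using a 2D epipolar-plane image (EPI) and a scale-normalized second-derivative-of-Gaussian filter as a stand-in for the paper's specialized kernel.

```python
import numpy as np
from scipy import ndimage

def epi_scale_depth_response(epi, slopes, sigmas):
    """Build a (slope, scale, position) response stack for a 2D EPI.

    For each candidate EPI-line slope (a proxy for depth), rows of the
    EPI are sheared so rays of that slope become vertical, averaged over
    views, then filtered with a sigma^2-normalized second derivative of
    a Gaussian along the spatial axis. Extrema of this stack across
    (slope, scale) indicate regions of constant depth at a given scale.
    """
    n_u, n_x = epi.shape
    u = np.arange(n_u) - (n_u - 1) / 2.0  # view coordinate, centered
    responses = np.zeros((len(slopes), len(sigmas), n_x))
    for i, s in enumerate(slopes):
        # Shear row j by -s*u[j]: rays with slope s align vertically.
        sheared = np.stack(
            [ndimage.shift(epi[j], -s * u[j], order=1, mode='nearest')
             for j in range(n_u)]
        )
        col = sheared.mean(axis=0)  # average over views
        for k, sigma in enumerate(sigmas):
            # sigma^2 factor keeps responses comparable across scales.
            d2 = ndimage.gaussian_filter1d(col, sigma, order=2)
            responses[i, k] = (sigma ** 2) * d2
    return responses

# Tiny synthetic EPI: a dark bar whose position moves by 1 pixel per view,
# i.e. true slope 1. The strongest response should pick that slope out.
epi = np.ones((5, 64))
for j in range(5):
    c = 30 + (j - 2)  # center shifts with view index
    epi[j, c - 2:c + 3] = 0.0

resp = epi_scale_depth_response(epi, slopes=[0.0, 1.0, 2.0],
                                sigmas=[1.0, 2.0, 4.0])
best_slope_idx = np.unravel_index(np.argmax(np.abs(resp)), resp.shape)[0]
```

When the shear matches the true slope, the averaged column stays sharp and the normalized second-derivative response peaks; mismatched shears blur the bar and weaken the response, which is the mechanism behind reading depth off the extrema.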

Related Material


[pdf]
[bibtex]
@InProceedings{Tosic_2014_CVPR_Workshops,
author = {Tosic, Ivana and Berkner, Kathrin},
title = {Light Field Scale-Depth Space Transform for Dense Depth Estimation},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2014}
}