Depth From Defocus in the Wild

Huixuan Tang, Scott Cohen, Brian Price, Stephen Schiller, Kiriakos N. Kutulakos; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2740-2748

Abstract

We consider the problem of two-frame depth from defocus in conditions unsuitable for existing methods yet typical of everyday photography: a handheld cellphone camera, a small aperture, a non-stationary scene and sparse surface texture. Our approach combines a global analysis of image content---3D surfaces, deformations, figure-ground relations, textures---with local estimation of joint depth-flow likelihoods in tiny patches. To enable local estimation we (1) derive novel defocus-equalization filters that induce brightness constancy across frames and (2) impose a tight upper bound on defocus blur---just three pixels in radius---through an appropriate choice of the second frame. For global analysis we use a novel piecewise-spline scene representation that can propagate depth and flow across large irregularly-shaped regions. Our experiments show that this combination preserves sharp boundaries and yields good depth and flow maps in the face of significant noise, uncertainty, non-rigidity, and data sparsity.
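The defocus-equalization idea can be illustrated with a toy sketch. Here Gaussian kernels stand in for the lens point-spread function and the per-frame blur radii are made-up values for a single depth hypothesis (the paper derives its actual filters differently); the point is that blurring each frame with the other frame's kernel brings both to the same total blur, so brightness constancy holds at the correct depth:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
sharp = gaussian_filter(rng.standard_normal((64, 64)), 1.0)  # smooth synthetic texture

# Two frames of the same scene with different defocus at one depth.
# sigma1, sigma2 are hypothetical blur radii, not values from the paper.
sigma1, sigma2 = 1.5, 3.0
frame1 = gaussian_filter(sharp, sigma1)
frame2 = gaussian_filter(sharp, sigma2)

# Equalization: convolve each frame with the *other* frame's kernel.
# Gaussian blurs compose (sigmas add in quadrature), so both equalized
# images carry the same total blur sqrt(sigma1**2 + sigma2**2).
eq1 = gaussian_filter(frame1, sigma2)
eq2 = gaussian_filter(frame2, sigma1)

# At the correct depth hypothesis the equalized frames agree pixelwise.
print(np.max(np.abs(eq1 - eq2)))
```

For a wrong depth hypothesis the assumed kernels would not match the true ones, the equalized frames would disagree, and that residual is what a local depth likelihood can score.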

Related Material

[bibtex]
@InProceedings{Tang_2017_CVPR,
author = {Tang, Huixuan and Cohen, Scott and Price, Brian and Schiller, Stephen and Kutulakos, Kiriakos N.},
title = {Depth From Defocus in the Wild},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {July},
year = {2017}
}