Supplementary Material for Depth from Shading, Defocus, and Correspondence Using Light-Field Angular Coherence

CVPR

In this supplementary material, we compare our depth estimation against the Lytro Illum software [1], Barron and Malik [2], Wanner and Goldluecke [3], and Tao et al. [4], and our shading estimation against Chen and Koltun [5].

In our experiments, we used the default parameters provided by the authors. For Barron and Malik [2] and Chen and Koltun [5], which take an initial depth map as input, we used our regularized depth (without shading constraints). We included examples with smooth surfaces as well as textured surfaces with shading cues (dinosaur and leaf).
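As a rough illustration of this setup, the sketch below feeds the same initial depth to every baseline and normalizes the outputs for side-by-side display. The wrapper callables and array conventions are our own assumptions for illustration, not the authors' released interfaces.

```python
import numpy as np

def normalize_depth(depth):
    """Scale a depth map to [0, 1] for side-by-side visualization."""
    depth = depth.astype(np.float64)
    span = depth.max() - depth.min()
    return (depth - depth.min()) / span if span > 0 else np.zeros_like(depth)

def run_comparison(initial_depth, central_view, baselines):
    """Run each baseline from the same starting point.

    `baselines` maps a method name to a callable taking
    (central_view, initial_depth) and returning a depth map --
    e.g., thin wrappers around each author's released code run with
    its default parameters (hypothetical wrappers; the real
    codebases have their own interfaces).
    """
    results = {"ours_initial": initial_depth}
    for name, method in baselines.items():
        results[name] = method(central_view, initial_depth)
    return {name: normalize_depth(d) for name, d in results.items()}
```

Passing the baselines as a dictionary of callables keeps the harness independent of each codebase's internals.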

We show that our depth estimation is accurate in general scenes. For the Lytro Illum, the factory calibration and in-house post-processing show promising results; however, they fall short in handling noise and do not use shading information to enhance shape estimation. As stated in the paper, the Barron and Malik algorithm is designed for active systems that provide denser depth estimates on smooth surfaces. With light-field data, the depth estimation is not as dense and, as a result, their shading output is unstable; we did not include their shading results in our comparisons because their shading images fluctuate between black images and images resembling their depth estimates. Wanner and Goldluecke's algorithm performs similarly to our initial depth estimation (regularized without shading information), but generally shows errors on flat surfaces and becomes unstable on noisier images (leaf). Tao et al.'s results exhibit blockiness and regularization errors. Our initial depth estimation performs favorably across the different inputs and is robust to noise, but produces flat results; regularizing with shading constraints yields a better estimate of the surfaces. For shading, we compared against Chen and Koltun and show that our shading estimation is improved by using the light-field data.
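For context on why the initial estimate is robust to noise, below is a minimal sketch of the correspondence cue used in light-field depth estimation, in the spirit of Tao et al. [4]: shear the sub-aperture views to a candidate depth and measure how much the angular samples disagree. The 4D array layout and the integer-pixel shifts are simplifying assumptions, not the released implementation.

```python
import numpy as np

def correspondence_depth(light_field, shears):
    """Per-pixel depth labels from the angular-variance (correspondence) cue.

    light_field: array of shape (n_u, n_v, height, width), angular
    coordinates first (an assumed layout). For each candidate shear,
    every sub-aperture view is shifted in proportion to its angular
    offset; at the correct depth the views align, so the variance
    across the angular samples is minimal.
    """
    n_u, n_v, h, w = light_field.shape
    cu, cv = n_u // 2, n_v // 2
    responses = []
    for alpha in shears:
        shifted = np.empty_like(light_field)
        for u in range(n_u):
            for v in range(n_v):
                dy = int(round(alpha * (u - cu)))
                dx = int(round(alpha * (v - cv)))
                shifted[u, v] = np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
        # Angular variance at each pixel; averaging over many views is
        # what makes the cue robust to per-view sensor noise.
        responses.append(shifted.reshape(n_u * n_v, h, w).var(axis=0))
    return np.argmin(np.stack(responses), axis=0)
```

The per-pixel argmin yields the flat-looking initial labels that the shading-constrained regularization subsequently refines.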

We used images captured by both the Lytro Illum and the original Lytro camera with different camera settings (focal length, ISO, etc.). The dinosaur and leaf scenes have no Lytro Illum results because they were captured with the original Lytro camera.

[Figure: depth comparison for one Lytro Illum scene. Columns: input central view image*, our final result, our initial result, Lytro Illum [1], Barron and Malik [2], Wanner and Goldluecke [3], and Tao et al. [4], plus our shading. Magnified crops (Views 1-3) of each method's depth map are shown below, along with the Chen and Koltun shading [5].]

[Figure: depth comparison for one Lytro Illum scene. Columns: input central view image*, our final result, our initial result, Lytro Illum [1], Barron and Malik [2], Wanner and Goldluecke [3], and Tao et al. [4], plus our shading. Magnified crops (Views 1-3) of each method's depth map are shown below, along with the Chen and Koltun shading [5].]

[Figure: depth comparison for one Lytro Illum scene. Columns: input central view image*, our final result, our initial result, Lytro Illum [1], Barron and Malik [2], Wanner and Goldluecke [3], and Tao et al. [4], plus our shading. Magnified crops (Views 1-3) of each method's depth map are shown below, along with the Chen and Koltun shading [5].]

[Figure: depth comparison for a scene captured with the original Lytro camera (dinosaur or leaf), so no Lytro Illum column. Columns: input central view image*, our final result, our initial result, Barron and Malik [2], Wanner and Goldluecke [3], and Tao et al. [4], plus our shading. Magnified crops (Views 1-3) of each method's depth map are shown below, along with the Chen and Koltun shading [5].]

[Figure: depth comparison for a scene captured with the original Lytro camera (dinosaur or leaf), so no Lytro Illum column. Columns: input central view image*, our final result, our initial result, Barron and Malik [2], Wanner and Goldluecke [3], and Tao et al. [4], plus our shading. Magnified crops (Views 1-3) of each method's depth map are shown below, along with the Chen and Koltun shading [5].]

[Figure: depth comparison for one Lytro Illum scene. Columns: input central view image*, our final result, our initial result, Lytro Illum [1], Barron and Malik [2], Wanner and Goldluecke [3], and Tao et al. [4], plus our shading. Magnified crops (Views 1-3) of each method's depth map are shown below, along with the Chen and Koltun shading [5].]

[Figure: depth comparison for one Lytro Illum scene. Columns: input central view image*, our final result, our initial result, Lytro Illum [1], Barron and Malik [2], Wanner and Goldluecke [3], and Tao et al. [4], plus our shading. Magnified crops (Views 1-3) of each method's depth map are shown below, along with the Chen and Koltun shading [5].]

[Figure: depth comparison for one Lytro Illum scene. Columns: input central view image*, our final result, our initial result, Lytro Illum [1], Barron and Malik [2], Wanner and Goldluecke [3], and Tao et al. [4], plus our shading. Magnified crops (Views 1-3) of each method's depth map are shown below, along with the Chen and Koltun shading [5].]

* Input central view images are generated using our light-field processing engine.
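For reference, extracting a central view from a decoded 4D light field amounts to selecting the middle angular sample. The sketch below assumes an (n_u, n_v, height, width, channels) layout, which is our convention for illustration rather than the engine's actual format.

```python
import numpy as np

def central_view(light_field):
    """Return the central sub-aperture image of a 4D light field.

    Assumes shape (n_u, n_v, height, width, channels) with angular
    coordinates first; the paper's processing engine may store its
    data differently.
    """
    n_u, n_v = light_field.shape[:2]
    return light_field[n_u // 2, n_v // 2]

# Example with a synthetic 7x7-view light field of 64x64 RGB images.
lf = np.random.rand(7, 7, 64, 64, 3)
img = central_view(lf)  # shape: (64, 64, 3)
```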



[1] Lytro Illum software.
[2] J. Barron and J. Malik. Intrinsic scene properties from a single RGB-D image. In CVPR, 2013.
[3] S. Wanner and B. Goldluecke. Globally consistent depth labeling of 4D light fields. In CVPR, 2012.
[4] M. Tao, S. Hadap, J. Malik, and R. Ramamoorthi. Depth from combining defocus and correspondence using light-field cameras. In ICCV, 2013.
[5] Q. Chen and V. Koltun. A simple model for intrinsic image decomposition with depth cues. In ICCV, 2013.


All images were shot with Lytro cameras under a variety of conditions, including different ISO settings, indoor and outdoor scenes, focal lengths, and exposures.