Learning to Synthesize a 4D RGBD Light Field From a Single Image
Pratul P. Srinivasan, Tongzhou Wang, Ashwin Sreelal, Ravi Ramamoorthi, Ren Ng; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2243-2251
Abstract
We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects. Our algorithm builds on recent view synthesis methods, but is unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point.
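The middle stage of the pipeline described above warps the input view into every sub-aperture view using the ray depths predicted by the first CNN. Below is a minimal sketch of that Lambertian rendering step, assuming a backward-warping formulation with per-ray disparities; the function names, the disparity sign convention, the 8x8 view grid, and the SciPy-based interpolation are illustrative assumptions, not the authors' implementation.

# Sketch (not the authors' code) of Lambertian light field rendering:
# each sub-aperture view (u, v) is synthesized by resampling the center
# view along epipolar lines using per-ray disparities predicted upstream.
import numpy as np
from scipy.ndimage import map_coordinates

def render_lambertian_view(center_rgb, disparity, u, v):
    """Backward-warp the center view to the sub-aperture at offset (u, v).

    center_rgb: (H, W, 3) float image of the central view.
    disparity:  (H, W) disparity for this view's rays (inverse depth, up to scale).
    u, v:       angular offsets of the target view from the center.
    """
    h, w, _ = center_rgb.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Assumed convention: sample the center view at coordinates shifted
    # by (offset * disparity) along each epipolar direction.
    src_y = ys + v * disparity
    src_x = xs + u * disparity
    return np.stack(
        [map_coordinates(center_rgb[..., c], [src_y, src_x], order=1, mode="nearest")
         for c in range(3)],
        axis=-1)

def render_lambertian_light_field(center_rgb, ray_disparities, num_views=8):
    """Assemble a (num_views, num_views, H, W, 3) Lambertian light field
    from per-ray disparities of shape (num_views, num_views, H, W)."""
    offsets = np.arange(num_views) - (num_views - 1) / 2.0
    return np.stack([
        np.stack([render_lambertian_view(center_rgb, ray_disparities[i, j], u, v)
                  for j, u in enumerate(offsets)], axis=0)
        for i, v in enumerate(offsets)], axis=0)

In the paper's full pipeline, the output of a step like this is then refined by the second CNN, which fills in occluded rays and non-Lambertian effects that a purely depth-based warp cannot reproduce.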
Related Material
[pdf]
[supp]
[arXiv]
[video]
[bibtex]
@InProceedings{Srinivasan_2017_ICCV,
author = {Srinivasan, Pratul P. and Wang, Tongzhou and Sreelal, Ashwin and Ramamoorthi, Ravi and Ng, Ren},
title = {Learning to Synthesize a 4D RGBD Light Field From a Single Image},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}