Learning to Capture Light Fields through a Coded Aperture Camera

Yasutaka Inagaki, Yuto Kobayashi, Keita Takahashi, Toshiaki Fujii, Hajime Nagahara; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 418-434

Abstract

We propose a learning-based framework for acquiring a light field through a coded aperture camera. Acquiring a light field is a challenging task due to the large amount of data involved. To make the acquisition process efficient, coded aperture cameras have been successfully adopted; with these cameras, a light field is computationally reconstructed from several images acquired with different aperture patterns. However, it is still difficult to reconstruct a high-quality light field from only a few acquired images. To tackle this limitation, we formulated the entire pipeline of light field acquisition from the perspective of an auto-encoder. This auto-encoder was implemented as a stack of fully convolutional layers and was trained end-to-end on a collection of training samples. We experimentally show that our method can successfully learn good image-acquisition and reconstruction strategies. With our method, light fields consisting of 5 x 5 or 8 x 8 images can be successfully reconstructed from only a few acquired images. Moreover, our method outperformed several state-of-the-art methods. We also applied our method to a real prototype camera to show that it is capable of capturing a real 3-D scene.
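To make the auto-encoder formulation more concrete, the sketch below is a rough, hypothetical illustration only (it is not the authors' implementation, and all names, layer counts, and the choice of two acquisitions are assumptions of this sketch). The encoder mimics coded-aperture acquisition as a per-pixel weighted sum of the sub-aperture views, with the weights acting as learnable aperture patterns, and a small stack of convolutional layers serves as the decoder that reconstructs the light field; both parts are trained end-to-end with a reconstruction loss.

import torch
import torch.nn as nn

class CodedApertureAutoEncoder(nn.Module):
    """Minimal sketch (assumed architecture, not the paper's exact network).

    Encoder: a 1x1 convolution over the view dimension, i.e. each simulated
    capture is a weighted sum of the N sub-aperture images, with the weights
    playing the role of an aperture pattern.
    Decoder: a stack of fully convolutional layers mapping the few acquired
    images back to all N views.
    """

    def __init__(self, num_views=25, num_acquisitions=2, hidden=64):
        super().__init__()
        self.encoder = nn.Conv2d(num_views, num_acquisitions,
                                 kernel_size=1, bias=False)
        self.decoder = nn.Sequential(
            nn.Conv2d(num_acquisitions, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, num_views, 3, padding=1),
        )

    def forward(self, light_field):
        # light_field: (batch, num_views, H, W), views stacked along channels.
        # Clamp aperture weights to [0, 1] so they stay physically realizable
        # transmittance values (an assumption made for this sketch).
        with torch.no_grad():
            self.encoder.weight.clamp_(0.0, 1.0)
        acquired = self.encoder(light_field)    # simulated coded-aperture captures
        return self.decoder(acquired)           # reconstructed light field

if __name__ == "__main__":
    model = CodedApertureAutoEncoder(num_views=25, num_acquisitions=2)
    dummy_lf = torch.rand(1, 25, 64, 64)        # toy 5 x 5 light field
    out = model(dummy_lf)
    loss = nn.functional.mse_loss(out, dummy_lf)  # end-to-end training objective
    print(out.shape, loss.item())

In this reading, learning "good image-acquisition and reconstruction strategies" corresponds to jointly optimizing the encoder's aperture weights and the decoder's reconstruction filters against the same loss.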

Related Material

[pdf]
[bibtex]
@InProceedings{Inagaki_2018_ECCV,
author = {Inagaki, Yasutaka and Kobayashi, Yuto and Takahashi, Keita and Fujii, Toshiaki and Nagahara, Hajime},
title = {Learning to Capture Light Fields through a Coded Aperture Camera},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}