An Epipolar Volume Autoencoder With Adversarial Loss for Deep Light Field Super-Resolution

Minchen Zhu, Anna Alperovich, Ole Johannsen, Antonin Sulc, Bastian Goldluecke; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019

Abstract


When capturing a light field of a scene, one typically faces a trade-off between spatial and angular resolution. Fortunately, light fields are also a rich source of information for solving the problem of super-resolution. In contrast to single-image approaches, where high-frequency content has to be hallucinated as the most likely source of the downscaled version, the sub-aperture views of a light field allow an actual reconstruction of the details that were removed by downsampling. In this paper, we propose a three-dimensional generative adversarial autoencoder network to recover a high-resolution light field from a low-resolution light field with a sparse set of viewpoints. We require only three views along both the horizontal and vertical axes to increase angular resolution by a factor of three while simultaneously increasing spatial resolution by a factor of either two or four in each direction.
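To make the described setup more concrete, below is a minimal PyTorch sketch of a 3D convolutional autoencoder paired with a discriminator supplying an adversarial loss, operating on an epipolar volume formed by stacking sub-aperture views along a depth axis. The layer counts, channel widths, trilinear upsampling step, and all class names are illustrative assumptions, not the architecture proposed in the paper.

import torch
import torch.nn as nn

class EpipolarVolumeAutoencoder(nn.Module):
    """Sketch of a 3D encoder-decoder on an epipolar volume.

    Sub-aperture views along one axis are stacked into the depth
    dimension of a 5D tensor (N, C, V, H, W), so 3D convolutions mix
    angular and spatial information. Layer counts and channel widths
    are illustrative, not the paper's exact configuration.
    """

    def __init__(self, in_views=3, out_views=9, spatial_scale=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # Trilinear upsampling expands angular (x3) and spatial (x2 or x4)
        # resolution; 3D convolutions then refine the interpolated volume.
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=(out_views / in_views,
                                      spatial_scale, spatial_scale),
                        mode='trilinear', align_corners=False),
            nn.Conv3d(64, 32, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):  # x: (N, 1, V_lr, H_lr, W_lr)
        return self.decoder(self.encoder(x))


class Discriminator(nn.Module):
    """3D patch discriminator providing the adversarial loss term."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(64, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


if __name__ == "__main__":
    # A grayscale epipolar volume: 3 input views of 32x32 pixels.
    lf_lr = torch.randn(1, 1, 3, 32, 32)
    gen = EpipolarVolumeAutoencoder(in_views=3, out_views=9, spatial_scale=2)
    lf_sr = gen(lf_lr)
    print(lf_sr.shape)  # torch.Size([1, 1, 9, 64, 64])

In a full training loop, the generator would typically be optimized with a reconstruction loss on the high-resolution light field plus the adversarial term from the discriminator; the specific loss weighting used by the authors is not reproduced here.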

Related Material


[bibtex]
@InProceedings{Zhu_2019_CVPR_Workshops,
author = {Zhu, Minchen and Alperovich, Anna and Johannsen, Ole and Sulc, Antonin and Goldluecke, Bastian},
title = {An Epipolar Volume Autoencoder With Adversarial Loss for Deep Light Field Super-Resolution},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019}
}