SIGNET: Efficient Neural Representation for Light Fields

Brandon Yushan Feng, Amitabh Varshney; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 14224-14233

Abstract


We present a novel neural representation for light field content that enables compact storage and easy local reconstruction with high fidelity. We use a fully-connected neural network to learn the mapping function between each light field pixel's coordinates and its corresponding color values. However, neural networks that simply take in raw coordinates are unable to accurately learn data containing fine details. We present an input transformation strategy based on the Gegenbauer polynomials, which have previously been shown to offer theoretical advantages over the Fourier basis. Our experiments show that this Gegenbauer-based design, combined with sinusoidal activation functions, leads to better light field reconstruction quality than a variety of network designs, including those with Fourier-inspired techniques introduced in prior work. Moreover, our SInusoidal Gegenbauer NETwork, or SIGNET, can represent light field scenes more compactly than state-of-the-art compression methods while maintaining comparable reconstruction quality. Because SIGNET is a functional representation, it also innately allows random access to encoded light field pixels. Furthermore, we demonstrate that SIGNET facilitates super-resolution along the spatial, angular, and temporal dimensions of a light field without any additional training.
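To make the described pipeline concrete, below is a minimal PyTorch sketch of the idea: each input coordinate is expanded into Gegenbauer polynomial features via the standard three-term recurrence, and the resulting features are fed through a small MLP with sinusoidal activations. The class name SignetSketch and all hyperparameters here (polynomial degree, the Gegenbauer parameter alpha, layer widths, the sine frequency w0) are illustrative assumptions, not the paper's exact configuration.

    import torch
    import torch.nn as nn

    def gegenbauer_features(x, degree, alpha):
        """Evaluate Gegenbauer polynomials C_0..C_degree at x using the
        standard three-term recurrence. x: (..., 1) tensor in [-1, 1]."""
        feats = [torch.ones_like(x)]            # C_0(x) = 1
        if degree >= 1:
            feats.append(2.0 * alpha * x)       # C_1(x) = 2*alpha*x
        for n in range(2, degree + 1):
            # C_n = [2x(n+alpha-1) C_{n-1} - (n+2alpha-2) C_{n-2}] / n
            c_n = (2.0 * x * (n + alpha - 1.0) * feats[-1]
                   - (n + 2.0 * alpha - 2.0) * feats[-2]) / n
            feats.append(c_n)
        return torch.cat(feats, dim=-1)         # (..., degree+1)

    class Sine(nn.Module):
        """Sinusoidal activation, as in SIREN-style networks."""
        def __init__(self, w0=30.0):            # w0 is an illustrative choice
            super().__init__()
            self.w0 = w0
        def forward(self, x):
            return torch.sin(self.w0 * x)

    class SignetSketch(nn.Module):
        """Hypothetical SIGNET-like MLP: Gegenbauer-transformed (x, y, u, v)
        light field coordinates in, RGB out. Sizes are illustrative only."""
        def __init__(self, in_dims=4, degree=8, alpha=0.5,
                     hidden=256, layers=4):
            super().__init__()
            self.degree, self.alpha = degree, alpha
            in_feats = in_dims * (degree + 1)
            blocks = [nn.Linear(in_feats, hidden), Sine()]
            for _ in range(layers - 1):
                blocks += [nn.Linear(hidden, hidden), Sine()]
            blocks.append(nn.Linear(hidden, 3))  # RGB output
            self.net = nn.Sequential(*blocks)

        def forward(self, coords):               # coords: (batch, 4) in [-1, 1]
            # Expand each coordinate independently, then concatenate.
            feats = gegenbauer_features(coords.unsqueeze(-1),
                                        self.degree, self.alpha)
            return self.net(feats.flatten(start_dim=1))

Because the network is a continuous function of its input, it can be queried at arbitrary coordinates, e.g. rgb = SignetSketch()(coords) for any coords in [-1, 1]^4; this continuous querying is what underlies the random pixel access and the training-free spatial, angular, and temporal super-resolution described above.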

Related Material


BibTeX

@InProceedings{Feng_2021_ICCV,
    author    = {Feng, Brandon Yushan and Varshney, Amitabh},
    title     = {SIGNET: Efficient Neural Representation for Light Fields},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {14224-14233}
}