Light Field Synthesis From a Monocular Image Using Variable LDI

Junhyeong Bak, In Kyu Park; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023, pp. 3399-3407

Abstract


Recent advancements in learning-based novel view synthesis enable users to synthesize a light field from a monocular image without special equipment. Moreover, state-of-the-art techniques, including the multiplane image (MPI), show outstanding performance in synthesizing an accurate light field from a monocular image. In this study, we propose a new variable layered depth image (VLDI) representation to generate precise light field synthesis results using only a few layers. Our method exploits an LDI representation built on a new two-stream halfway fusion network and transformation process. This framework has an efficient structure that directly generates, from the inputs, the regions that do not require network prediction. As a result, the proposed method allows us to acquire a high-quality light field easily and quickly. Experimental results show that the proposed method outperforms previous works quantitatively and qualitatively on diverse examples.
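For readers unfamiliar with the underlying representation: a layered depth image (LDI) stores multiple depth-ordered color samples per pixel, rather than the single sample of an ordinary image, which is what lets occluded content reappear in novel views. The sketch below is a minimal, generic LDI container for illustration only; it is not the paper's VLDI method, and all class and method names (`LDI`, `insert`, `front_color`) are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Color = Tuple[int, int, int]  # RGB


@dataclass
class LDIPixel:
    # Each pixel holds several (depth, color) samples, kept front-to-back.
    samples: List[Tuple[float, Color]] = field(default_factory=list)

    def insert(self, depth: float, color: Color) -> None:
        self.samples.append((depth, color))
        self.samples.sort(key=lambda s: s[0])  # nearest surface first


class LDI:
    """A layered depth image: a 2D grid of multi-sample pixels."""

    def __init__(self, width: int, height: int):
        self.width, self.height = width, height
        self.pixels = [[LDIPixel() for _ in range(width)]
                       for _ in range(height)]

    def insert(self, x: int, y: int, depth: float, color: Color) -> None:
        self.pixels[y][x].insert(depth, color)

    def front_color(self, x: int, y: int) -> Optional[Color]:
        # Color of the nearest surface at (x, y), or None if empty.
        s = self.pixels[y][x].samples
        return s[0][1] if s else None


# A background sample survives behind a foreground one at the same pixel,
# so a shifted viewpoint can reveal it instead of leaving a hole.
ldi = LDI(4, 4)
ldi.insert(1, 1, depth=5.0, color=(0, 0, 255))   # background (far)
ldi.insert(1, 1, depth=1.0, color=(255, 0, 0))   # foreground (near)
print(ldi.front_color(1, 1))  # → (255, 0, 0)
```

The "variable" aspect of the paper's VLDI refers to adapting this layered structure per scene, in contrast to MPI-style methods that use a fixed stack of fronto-parallel planes.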

Related Material


[pdf]
[bibtex]
@InProceedings{Bak_2023_CVPR,
    author    = {Bak, Junhyeong and Park, In Kyu},
    title     = {Light Field Synthesis From a Monocular Image Using Variable LDI},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {3399-3407}
}