Novel View Synthesis with View-Dependent Effects from a Single Image

Juan Luis Gonzalez Bello, Munchurl Kim; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 10413-10423

Abstract


In this paper, we address single-image novel view synthesis (NVS) by, for the first time, integrating view-dependent effects (VDEs) into the process. Our approach leverages camera motion priors to model VDEs, treating negative disparity as the representation of these effects in the scene. Having identified that specularities move in accordance with camera motion, we infuse VDEs into input images by aggregating pixel colors along the negative-depth region of epipolar lines. Additionally, we introduce a relaxed volumetric rendering approximation that improves efficiency by computing densities in a single pass for single-image NVS. Notably, our method learns single-image NVS from image sequences alone, making it a fully self-supervised learning approach that requires no depth or camera-pose annotations. We present extensive experimental results showing that our proposed method can learn NVS with VDEs, outperforming the SOTA single-view NVS methods on the RealEstate10K and MannequinChallenge datasets. Visit our project site: https://kaist-viclab.github.io/monovde-site.
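The abstract's core idea of infusing VDEs by aggregating pixel colors along the negative-disparity side of epipolar lines can be illustrated with a minimal sketch. This is not the authors' implementation; the function `aggregate_vde`, its parameters (`num_samples`, `max_disp`), and the uniform-averaging aggregation are all illustrative assumptions.

```python
import numpy as np

def aggregate_vde(image, epipolar_dir, num_samples=8, max_disp=16.0):
    """Hypothetical sketch: aggregate pixel colors along the
    negative-disparity side of per-pixel epipolar lines to approximate
    view-dependent effects (specularities that move with the camera).

    image: (H, W, 3) float array; epipolar_dir: unit 2-vector (dy, dx)
    giving the epipolar line direction induced by the camera motion.
    """
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    acc = np.zeros_like(image)
    for i in range(1, num_samples + 1):
        # Negative disparity: step opposite to the ordinary
        # (positive-depth) parallax direction along the epipolar line.
        offset = -max_disp * i / num_samples
        sy = np.clip(np.round(ys + offset * epipolar_dir[0]).astype(int), 0, h - 1)
        sx = np.clip(np.round(xs + offset * epipolar_dir[1]).astype(int), 0, w - 1)
        acc += image[sy, sx]
    # Uniform average over the sampled colors (the paper's actual
    # aggregation weights are learned; this is a simplification).
    return acc / num_samples
```

In the paper's formulation, the aggregated colors serve as a VDE layer blended into the input image before rendering; here a plain mean stands in for whatever learned weighting the method uses.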

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Bello_2024_CVPR,
    author    = {Bello, Juan Luis Gonzalez and Kim, Munchurl},
    title     = {Novel View Synthesis with View-Dependent Effects from a Single Image},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {10413-10423}
}