Ray Priors Through Reprojection: Improving Neural Radiance Fields for Novel View Extrapolation

[bibtex]
@InProceedings{Zhang_2022_CVPR,
  author    = {Zhang, Jian and Zhang, Yuanqing and Fu, Huan and Zhou, Xiaowei and Cai, Bowen and Huang, Jinchi and Jia, Rongfei and Zhao, Binqiang and Tang, Xing},
  title     = {Ray Priors Through Reprojection: Improving Neural Radiance Fields for Novel View Extrapolation},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2022},
  pages     = {18376-18386}
}
Abstract
Neural Radiance Fields (NeRF) have emerged as a potent paradigm for representing scenes and synthesizing photo-realistic images. A main limitation of conventional NeRFs is that they often fail to produce high-quality renderings under novel viewpoints that are significantly different from the training viewpoints. In this paper, rather than exploring few-shot image synthesis, we study the novel view extrapolation setting in which (1) the training images can well describe an object, and (2) there is a notable discrepancy between the training and test viewpoint distributions. We present RapNeRF (RAy Priors) as a solution. Our insight is that the inherent appearance of a 3D surface under any of its visible projections should be consistent. We therefore propose a random ray casting policy that allows unseen views to be trained using seen views. Furthermore, we show that a ray atlas pre-computed from the viewing directions of the observed rays can further enhance rendering quality for extrapolated views. A main limitation is that RapNeRF tends to remove strong view-dependent effects, because it leverages the multi-view consistency property.
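The random ray casting idea behind multi-view consistency can be illustrated with a minimal sketch (the function name, the spherical sampling of viewpoints, and all parameters here are assumptions for illustration, not the paper's implementation): given a 3D surface point recovered along a training ray, sample a random unseen camera position and cast a ray through that same point; since the point's appearance should be consistent across its visible projections, the color observed along the seen ray can supervise the unseen one.

```python
import numpy as np

def random_ray_toward_point(p, radius=1.0, rng=None):
    """Illustrative sketch (not the paper's code): sample a random
    viewpoint on a sphere of the given radius around the surface
    point p, and return a ray (origin, unit direction) that passes
    through p. The color seen at p from a training view can then
    supervise this randomly cast, unseen ray."""
    rng = rng or np.random.default_rng()
    # Random unit vector -> random viewpoint on the sphere around p.
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)
    origin = p + radius * v
    # Unit direction pointing from the sampled origin back to p.
    direction = (p - origin)
    direction /= np.linalg.norm(direction)
    return origin, direction
```

By construction, `origin + radius * direction` recovers the surface point, so the sampled ray intersects the same geometry as the seen ray and the consistency loss can be applied to its predicted color.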