Learning Neural Duplex Radiance Fields for Real-Time View Synthesis

Ziyu Wan, Christian Richardt, Aljaž Božič, Chao Li, Vijay Rengarajan, Seonghyeon Nam, Xiaoyu Xiang, Tuotuo Li, Bo Zhu, Rakesh Ranjan, Jing Liao
Abstract
Neural radiance fields (NeRFs) enable novel view synthesis with unprecedented visual quality. However, to render photorealistic images, NeRFs require hundreds of deep multilayer perceptron (MLP) evaluations for each pixel. This is prohibitively expensive and makes real-time rendering infeasible, even on powerful modern GPUs. In this paper, we propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations that are fully compatible with the massively parallel graphics rendering pipeline. We represent scenes as neural radiance features encoded on a two-layer duplex mesh, which effectively overcomes the inherent inaccuracies in 3D surface reconstruction by learning the aggregated radiance information from a reliable interval of ray-surface intersections. To exploit local geometric relationships of nearby pixels, we leverage screen-space convolutions instead of the MLPs used in NeRFs to achieve high-quality appearance. Finally, the performance of the whole framework is further boosted by a novel multi-view distillation optimization strategy. We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
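The abstract outlines the core rendering idea: rasterize learned features from the two layers of the duplex mesh and decode them with a screen-space convolutional network rather than a per-sample MLP. Below is a minimal, illustrative PyTorch sketch of that idea; the module name, channel sizes, concatenation-based fusion, and network depth are assumptions made for illustration and are not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class DuplexRadianceDecoder(nn.Module):
    """Illustrative sketch: fuse features rasterized from the inner and outer
    duplex-mesh layers, then decode colors with a small screen-space CNN
    instead of a per-sample MLP. All sizes here are assumptions."""

    def __init__(self, feat_dim: int = 8, hidden: int = 32):
        super().__init__()
        # Input: features from both mesh layers plus a per-pixel view direction.
        in_ch = 2 * feat_dim + 3
        self.cnn = nn.Sequential(
            nn.Conv2d(in_ch, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 3, kernel_size=1),  # RGB output
        )

    def forward(self, feat_inner, feat_outer, view_dirs):
        # feat_inner / feat_outer: (B, feat_dim, H, W) features rasterized from
        # the two duplex-mesh layers; view_dirs: (B, 3, H, W) viewing directions.
        x = torch.cat([feat_inner, feat_outer, view_dirs], dim=1)
        return torch.sigmoid(self.cnn(x))


# Usage: decode a 256x256 view from (here randomly generated) rasterized features.
decoder = DuplexRadianceDecoder()
f_in = torch.randn(1, 8, 256, 256)
f_out = torch.randn(1, 8, 256, 256)
dirs = torch.randn(1, 3, 256, 256)
rgb = decoder(f_in, f_out, dirs)  # (1, 3, 256, 256)
```

In a real pipeline, the per-layer features would come from rasterizing the duplex mesh with the standard graphics pipeline, which is what makes the representation amenable to real-time rendering on GPUs.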
Related Material

[pdf] [supp] [bibtex]

@InProceedings{Wan_2023_CVPR,
  author    = {Wan, Ziyu and Richardt, Christian and Bo\v{z}i\v{c}, Alja\v{z} and Li, Chao and Rengarajan, Vijay and Nam, Seonghyeon and Xiang, Xiaoyu and Li, Tuotuo and Zhu, Bo and Ranjan, Rakesh and Liao, Jing},
  title     = {Learning Neural Duplex Radiance Fields for Real-Time View Synthesis},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2023},
  pages     = {8307-8316}
}