PaReNeRF: Toward Fast Large-scale Dynamic NeRF with Patch-based Reference

Xiao Tang, Min Yang, Penghui Sun, Hui Li, Yuchao Dai, Feng Zhu, Hojae Lee; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 5428-5438

Abstract


With its photo-realistic image generation, the Neural Radiance Field (NeRF) is widely used for large-scale dynamic scene reconstruction, e.g., in autonomous driving simulators. However, large-scale scene reconstruction still suffers from extremely long training and rendering times. Low-resolution (LR) rendering combined with upsampling can alleviate this problem, but it degrades image quality. In this paper, we design a lightweight reference decoder that exploits prior information from known views to improve the reconstruction quality of novel views. In addition, to speed up the prior-information search, we propose a search method based on optical flow and structural similarity. Results on the KITTI and VKITTI2 datasets show that our method significantly outperforms baseline methods in training speed, rendering speed, and rendering quality.
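The abstract's prior-information search can be pictured as a flow-guided patch lookup scored by structural similarity. The sketch below is not the authors' code: it uses an illustrative simplified single-window SSIM and hypothetical names (`find_reference_patch`, `flow`, `radius`) to show how a patch in a known (reference) view might be selected for a query patch from a novel view, starting from an optical-flow-predicted location and refining by SSIM within a small search window.

```python
# Minimal sketch (assumed, not the authors' implementation): flow-guided patch
# search in a reference view, ranked by a simplified single-window SSIM.
import numpy as np


def ssim(p, q, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified single-window SSIM between two grayscale patches in [0, 1]."""
    mu_p, mu_q = p.mean(), q.mean()
    var_p, var_q = p.var(), q.var()
    cov = ((p - mu_p) * (q - mu_q)).mean()
    return ((2 * mu_p * mu_q + c1) * (2 * cov + c2)) / (
        (mu_p ** 2 + mu_q ** 2 + c1) * (var_p + var_q + c2)
    )


def find_reference_patch(query, ref_img, center, flow, patch=16, radius=4):
    """Search the reference image around the flow-predicted location and
    return the candidate patch with the highest SSIM to the query patch."""
    h, w = ref_img.shape
    # Optical flow gives the starting point of the search in the reference view.
    cy, cx = int(center[0] + flow[0]), int(center[1] + flow[1])
    best, best_score = None, -1.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = cy + dy, cx + dx
            if y < 0 or x < 0 or y + patch > h or x + patch > w:
                continue
            cand = ref_img[y:y + patch, x:x + patch]
            score = ssim(query, cand)
            if score > best_score:
                best, best_score = cand, score
    return best, best_score


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((128, 128))
    qry = ref[40:56, 60:76]  # query patch copied from the reference for the demo
    _, score = find_reference_patch(qry, ref, center=(38, 58), flow=(2.0, 2.0))
    print(f"best SSIM: {score:.3f}")  # expected to be ~1.0 for the exact match
```

In this toy setup the flow offset lands the search exactly on the source patch, so the best SSIM is 1.0; in practice the flow prediction is only approximate and the SSIM ranking over the search window selects the best-matching reference patch.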

Related Material


[bibtex]
@InProceedings{Tang_2024_CVPR,
  author    = {Tang, Xiao and Yang, Min and Sun, Penghui and Li, Hui and Dai, Yuchao and Zhu, Feng and Lee, Hojae},
  title     = {PaReNeRF: Toward Fast Large-scale Dynamic NeRF with Patch-based Reference},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {5428-5438}
}