ZeroRF: Fast Sparse View 360° Reconstruction with Zero Pretraining

Ruoxi Shi, Xinyue Wei, Cheng Wang, Hao Su; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 21114-21124

Abstract


We present ZeroRF, a novel per-scene optimization method addressing the challenge of sparse-view 360° reconstruction in neural field representations. Current breakthroughs like Neural Radiance Fields (NeRF) have demonstrated high-fidelity image synthesis but struggle with sparse input views. Existing methods, such as generalizable NeRFs and per-scene optimization approaches, face limitations in data dependency, computational cost, and generalization across diverse scenarios. To overcome these challenges, we propose ZeroRF, whose key idea is to integrate a tailored Deep Image Prior into a factorized NeRF representation. Unlike traditional methods, ZeroRF parametrizes feature grids with a neural network generator, enabling efficient sparse-view 360° reconstruction without any pretraining or additional regularization. Extensive experiments showcase ZeroRF's versatility and superiority in terms of both quality and speed, achieving state-of-the-art results on benchmark datasets. ZeroRF's significance extends to applications in 3D content generation and editing. Project page: https://sarahweiii.github.io/zerorf/
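To make the key idea concrete, the sketch below shows one way an untrained convolutional generator can parametrize the feature grids of a factorized NeRF, in the spirit of the Deep Image Prior. This is a minimal illustration only, assuming a TensoRF-style triplane factorization in PyTorch; PlaneGenerator, sample_features, and all layer sizes are hypothetical choices, not the authors' exact architecture:

import torch
import torch.nn as nn
import torch.nn.functional as F

class PlaneGenerator(nn.Module):
    """Deep-Image-Prior-style generator: maps a fixed random latent to one
    2D feature plane of a factorized radiance field. Only the convolution
    weights are optimized, per scene, from the sparse input views."""
    def __init__(self, latent_ch=8, feat_ch=16, res=128):
        super().__init__()
        # The latent input is random and frozen: the inductive bias comes
        # from the convolutional architecture itself, not from pretraining.
        self.register_buffer("latent",
                             torch.randn(1, latent_ch, res // 4, res // 4))
        self.net = nn.Sequential(
            nn.Conv2d(latent_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, feat_ch, 3, padding=1),
        )

    def forward(self):
        return self.net(self.latent)  # (1, feat_ch, res, res)

# One generator per plane of a triplane (XY, XZ, YZ) factorization.
planes = nn.ModuleList(PlaneGenerator() for _ in range(3))

def sample_features(xyz):
    """Bilinearly sample the three generated planes at points xyz in
    [-1, 1]^3 and sum them, as in common factorized-grid NeRFs. The
    result would feed a small MLP decoder for density and color."""
    uvs = [xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]]
    feats = 0.0
    for gen, uv in zip(planes, uvs):
        plane = gen()                # planes are regenerated at every step
        grid = uv.view(1, -1, 1, 2)  # grid_sample expects (N, H, W, 2)
        out = F.grid_sample(plane, grid, align_corners=False)
        feats = feats + out.squeeze(-1).squeeze(0).t()  # (num_points, feat_ch)
    return feats

# Usage: query features for random points; gradients flow back into the
# generators, which are fitted with an ordinary rendering loss.
pts = torch.rand(1024, 3) * 2 - 1
print(sample_features(pts).shape)  # torch.Size([1024, 16])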

Related Material


BibTeX:
@InProceedings{Shi_2024_CVPR,
    author    = {Shi, Ruoxi and Wei, Xinyue and Wang, Cheng and Su, Hao},
    title     = {ZeroRF: Fast Sparse View 360° Reconstruction with Zero Pretraining},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {21114-21124}
}