NeRFiller: Completing Scenes via Generative 3D Inpainting
Abstract
We propose NeRFiller, an approach that completes missing portions of a 3D capture via generative 3D inpainting using off-the-shelf 2D visual generative models. Often, parts of a captured 3D scene or object are missing due to mesh reconstruction failures or a lack of observations (e.g., contact regions such as the bottom of objects, or hard-to-reach areas). We approach this challenging 3D inpainting problem by leveraging a 2D inpainting diffusion model. We identify a surprising behavior of these models, where they generate more 3D-consistent inpaints when images form a 2x2 grid, and we show how to generalize this behavior to more than four images. We then present an iterative framework to distill these inpainted regions into a single consistent 3D scene. In contrast to related works, we focus on completing scenes rather than deleting foreground objects, and our approach does not require tight 2D object masks or text. We compare our approach to relevant baselines adapted to our setting on a variety of scenes, where NeRFiller creates the most 3D-consistent and plausible scene completions. Our project page is at https://ethanweber.me/nerfiller/.
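The abstract's central observation, that an off-the-shelf 2D inpainting diffusion model produces more mutually consistent fills when four views are tiled into a 2x2 grid and inpainted jointly, is straightforward to prototype. Below is a minimal sketch using the Hugging Face diffusers library; the checkpoint name, file names, and the make_grid/split_grid helpers are illustrative assumptions, not the authors' released code.

# Sketch of the 2x2 grid-inpainting idea: tile four views of a scene into one
# image, inpaint them jointly with a 2D inpainting diffusion model, then split
# the result back into four (more mutually consistent) inpainted views.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline


def make_grid(images: list[Image.Image], size: int = 256) -> Image.Image:
    """Tile four equally sized views into a single 2x2 image."""
    assert len(images) == 4
    grid = Image.new("RGB", (2 * size, 2 * size))
    for i, img in enumerate(images):
        grid.paste(img.resize((size, size)), ((i % 2) * size, (i // 2) * size))
    return grid


def split_grid(grid: Image.Image, size: int = 256) -> list[Image.Image]:
    """Invert make_grid: crop the 2x2 image back into four views."""
    return [
        grid.crop(((i % 2) * size, (i // 2) * size,
                   (i % 2 + 1) * size, (i // 2 + 1) * size))
        for i in range(4)
    ]


# Assumed off-the-shelf checkpoint; any 2D inpainting diffusion model works.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

views = [Image.open(f"view_{i}.png") for i in range(4)]  # four scene renders
masks = [Image.open(f"mask_{i}.png") for i in range(4)]  # white = region to fill

# Inpainting the 2x2 grid jointly tends to give more 3D-consistent fills
# than inpainting each view independently.
result = pipe(
    prompt="",  # per the abstract, the approach does not rely on text prompts
    image=make_grid(views),
    mask_image=make_grid(masks),
    height=512,
    width=512,
).images[0]

inpainted_views = split_grid(result)

The full method generalizes this grid behavior to more than four views and distills the resulting inpaints into a single consistent 3D scene via the iterative framework described in the paper.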
Related Material
[pdf] [supp] [arXiv] [bibtex]
@InProceedings{Weber_2024_CVPR,
    author    = {Weber, Ethan and Holynski, Aleksander and Jampani, Varun and Saxena, Saurabh and Snavely, Noah and Kar, Abhishek and Kanazawa, Angjoo},
    title     = {NeRFiller: Completing Scenes via Generative 3D Inpainting},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {20731-20741}
}