BibTeX:
@InProceedings{Wang_2024_CVPR,
  author    = {Wang, Chen and Wang, Angtian and Li, Junbo and Yuille, Alan and Xie, Cihang},
  title     = {Benchmarking Robustness in Neural Radiance Fields},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {2926-2936}
}
Benchmarking Robustness in Neural Radiance Fields
Abstract
Neural Radiance Fields (NeRF) have demonstrated excellent quality in novel view synthesis thanks to their ability to model 3D object geometries in a concise formulation. However, current NeRF-based models rely on clean images with accurate camera calibration, which can be difficult to obtain in the real world, where data is often subject to corruption and distortion. In this work, we provide the first comprehensive analysis of the robustness of NeRF-based novel view synthesis algorithms in the presence of different types of corruptions. We find that NeRF-based models degrade significantly under corruption and are sensitive to a different set of corruptions than image recognition models. Furthermore, we analyze the robustness of the feature encoder in generalizable methods, which synthesize images using neural features extracted via convolutional neural networks or transformers, and find that it contributes only marginally to robustness. Finally, we reveal that standard data augmentation techniques, which can significantly improve the robustness of recognition models, do not help the robustness of NeRF-based models. We hope our findings will attract more researchers to study the robustness of NeRF-based approaches and help improve their performance in the real world.
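To make the benchmarking setup concrete, the sketch below illustrates one way robustness under corruption can be quantified: apply a common corruption (here, additive Gaussian noise) to an input view and measure the PSNR gap against the clean image. This is a simplified illustration, not the paper's actual benchmark; the corruption parameters (`sigma`) and the toy image are assumptions, and in the real evaluation the comparison would be between renderings of models trained on clean versus corrupted views.

```python
import numpy as np

def gaussian_noise(img, sigma=0.1, seed=0):
    """Additive Gaussian noise, a representative common corruption.
    `sigma` is a hypothetical severity parameter, not from the paper."""
    rng = np.random.default_rng(seed)
    noisy = img + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0.0, 1.0)

def psnr(ref, img, max_val=1.0):
    """Peak signal-to-noise ratio, a standard view-synthesis metric."""
    mse = np.mean((ref - img) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val**2 / mse)

# Toy "clean" view and its corrupted counterpart (illustrative only).
clean = np.random.default_rng(1).random((64, 64, 3))
corrupted = gaussian_noise(clean, sigma=0.1)

# The drop from the clean baseline quantifies sensitivity to this corruption.
print(f"PSNR(clean, corrupted) = {psnr(clean, corrupted):.2f} dB")
```

Sweeping `sigma` over several severities and averaging the PSNR drop across corruption types would yield a robustness curve analogous to those used in image-recognition robustness benchmarks.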