Instant Continual Learning of Neural Radiance Fields

Ryan Po, Zhengyang Dong, Alexander W. Bergman, Gordon Wetzstein; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023, pp. 3334-3344

Abstract


Neural radiance fields (NeRFs) have emerged as an effective method for novel-view synthesis and 3D scene reconstruction. However, conventional training methods require access to all training views during scene optimization. This assumption may be prohibitive in continual learning scenarios, where new data is acquired in a sequential manner and a continuous update of the NeRF is desired, as in automotive applications or aerial imaging. When naively trained in such a continual setting, traditional scene representation frameworks suffer from catastrophic forgetting, where previously learned knowledge is corrupted after training on new data. Prior attempts at alleviating forgetting in NeRFs suffer from low reconstruction quality and high latency, making them impractical for real-world applications. We propose a continual learning framework for training NeRFs that leverages replay-based methods combined with a hybrid explicit-implicit scene representation. Our method outperforms previous methods in reconstruction quality when trained in a continual setting, while having the additional benefit of being an order of magnitude faster.
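
The sketch below illustrates the generic replay-based idea mentioned in the abstract: data arrives as a sequence of tasks, and a small buffer of earlier samples is mixed into each new training batch to reduce catastrophic forgetting. It is a minimal, hedged example only; the TinyRadianceMLP stand-in, buffer sizes, and training loop are hypothetical and do not reflect the authors' hybrid explicit-implicit representation or their actual implementation.

# Minimal replay-based continual training sketch (illustrative only, not the paper's code).
import random
import torch
import torch.nn as nn

class TinyRadianceMLP(nn.Module):
    """Stand-in for a NeRF-style field: maps a 6-D ray encoding to an RGB value."""
    def __init__(self, in_dim=6, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, rays):
        return self.net(rays)

def train_continually(tasks, buffer_size=4096, replay_ratio=0.5, steps=200):
    """tasks: list of (rays, colors) tensor pairs arriving one after another."""
    model = TinyRadianceMLP()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    replay_rays, replay_rgb = [], []  # simple replay buffer of past samples

    for rays, colors in tasks:
        for _ in range(steps):
            # Sample a fresh batch from the current task.
            idx = torch.randint(0, rays.shape[0], (128,))
            batch_rays, batch_rgb = rays[idx], colors[idx]

            # Mix in replayed samples from earlier tasks, if any.
            if replay_rays:
                k = int(replay_ratio * 128)
                j = random.sample(range(len(replay_rays)), min(k, len(replay_rays)))
                batch_rays = torch.cat([batch_rays, torch.stack([replay_rays[i] for i in j])])
                batch_rgb = torch.cat([batch_rgb, torch.stack([replay_rgb[i] for i in j])])

            loss = nn.functional.mse_loss(model(batch_rays), batch_rgb)
            opt.zero_grad()
            loss.backward()
            opt.step()

        # Store a random subset of this task's samples for future replay.
        keep = torch.randperm(rays.shape[0])[: buffer_size // max(len(tasks), 1)]
        replay_rays.extend(rays[keep])
        replay_rgb.extend(colors[keep])
    return model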

Related Material


[pdf] [supp] [arXiv]
@InProceedings{Po_2023_ICCV,
    author    = {Po, Ryan and Dong, Zhengyang and Bergman, Alexander W. and Wetzstein, Gordon},
    title     = {Instant Continual Learning of Neural Radiance Fields},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2023},
    pages     = {3334-3344}
}