CLNeRF: Continual Learning Meets NeRF

Zhipeng Cai, Matthias Müller; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 23185-23194

Abstract

Novel view synthesis aims to render unseen views given a set of calibrated images. In practical applications, the coverage, appearance, or geometry of the scene may change over time, with new images continuously being captured. Efficiently incorporating such continuous change is an open challenge. Standard NeRF benchmarks only involve scene coverage expansion. To study other practical scene changes, we propose a new dataset, World Across Time (WAT), consisting of scenes that change in appearance and geometry over time. We also propose a simple yet effective method, CLNeRF, which introduces continual learning (CL) to Neural Radiance Fields (NeRFs). CLNeRF combines generative replay with the Instant Neural Graphics Primitives (NGP) architecture to effectively prevent catastrophic forgetting and to efficiently update the model when new data arrives. We also add trainable appearance and geometry embeddings to NGP, allowing a single compact model to handle complex scene changes. Without needing to store historical images, CLNeRF trained sequentially over multiple scans of a changing scene performs on par with the upper-bound model trained on all scans at once. Compared to other CL baselines, CLNeRF performs much better across standard benchmarks and WAT. The source code, a demo, and the WAT dataset are available at https://github.com/IntelLabs/CLNeRF.
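To make the two ingredients concrete, the sketch below (PyTorch) illustrates generative replay combined with per-scan embeddings. It is a minimal illustration under stated assumptions, not the authors' implementation: `sample_rays`, `render`, and the `TimeConditionedNGP` wrapper are hypothetical helpers invented here for exposition; see the repository above for the actual code.

```python
import copy
import torch
import torch.nn.functional as F

class TimeConditionedNGP(torch.nn.Module):
    """Hypothetical wrapper: an NGP-style field plus trainable per-scan
    appearance/geometry embeddings, so one compact model covers all scans.
    `render(model, rays, scan_ids)` is assumed to look up these embeddings
    and condition the field on them."""
    def __init__(self, field, num_scans, dim=16):
        super().__init__()
        self.field = field                                   # hash-grid NeRF backbone
        self.appearance = torch.nn.Embedding(num_scans, dim)
        self.geometry = torch.nn.Embedding(num_scans, dim)

def train_new_scan(model, optimizer, new_loader, past_poses, past_scan_ids,
                   sample_rays, render, num_steps=1000, num_replay=256):
    """One task of continual training with generative replay (sketch)."""
    # Snapshot the model before it sees the new scan; the frozen copy acts
    # as the "generator" that re-renders pseudo ground truth for past views,
    # so no historical images need to be stored.
    old_model = copy.deepcopy(model).eval()

    for _, (new_rays, new_rgbs, new_ids) in zip(range(num_steps), new_loader):
        # Replay targets: render rays from earlier scans' camera poses
        # with the frozen snapshot (pseudo labels, not stored photos).
        with torch.no_grad():
            replay_rays, replay_ids = sample_rays(past_poses, past_scan_ids,
                                                  num_replay)
            replay_rgbs = render(old_model, replay_rays, replay_ids)

        # Train on the union of new rays and replayed rays.
        rays = torch.cat([new_rays, replay_rays])
        ids = torch.cat([new_ids, replay_ids])
        targets = torch.cat([new_rgbs, replay_rgbs])

        optimizer.zero_grad()
        loss = F.mse_loss(render(model, rays, ids), targets)
        loss.backward()
        optimizer.step()
```

In this sketch, each new capture session only adds one row to the embedding tables, so the update stays cheap, while the replayed renderings anchor the shared field to earlier appearances and geometry.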

Related Material

[pdf] [supp]
[bibtex]
@InProceedings{Cai_2023_ICCV,
    author    = {Cai, Zhipeng and M\"uller, Matthias},
    title     = {CLNeRF: Continual Learning Meets NeRF},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {23185-23194}
}