Continual Neural Mapping: Learning an Implicit Scene Representation From Sequential Observations

Zike Yan, Yuxin Tian, Xuesong Shi, Ping Guo, Peng Wang, Hongbin Zha; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 15782-15792

Abstract


Recent advances have enabled a single neural network to serve as an implicit scene representation, establishing the mapping function between spatial coordinates and scene properties. In this paper, we take a further step towards continual learning of the implicit scene representation directly from sequential observations, namely Continual Neural Mapping. The proposed problem setting bridges the gap between batch-trained implicit neural representations and the streaming data common in the robotics and vision communities. We introduce an experience replay approach to tackle an exemplary task of continual neural mapping: approximating a continuous signed distance function (SDF) from sequential depth images as a scene geometry representation. We show for the first time that a single network can represent scene geometry over time continually without catastrophic forgetting, while achieving promising trade-offs between accuracy and efficiency.
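The core mechanism the abstract describes, replaying stored samples from past observations while fitting a coordinate-to-SDF network on new ones, can be sketched in a few dozen lines. The following is a loose NumPy illustration only, not the authors' implementation: `TinyMLP`, `ReplayBuffer`, the spherical toy SDF standing in for fused depth data, and every hyperparameter are invented here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

class ReplayBuffer:
    """Fixed-size reservoir of past (coordinate, sdf) pairs for rehearsal."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.coords, self.sdf = [], []
        self.seen = 0
    def add(self, x, y):
        # Reservoir sampling keeps a uniform subsample of the whole stream.
        for xi, yi in zip(x, y):
            self.seen += 1
            if len(self.coords) < self.capacity:
                self.coords.append(xi); self.sdf.append(yi)
            else:
                j = rng.integers(0, self.seen)
                if j < self.capacity:
                    self.coords[j] = xi; self.sdf[j] = yi
    def sample(self, n):
        idx = rng.integers(0, len(self.coords), size=n)
        return np.asarray(self.coords)[idx], np.asarray(self.sdf)[idx]

class TinyMLP:
    """One-hidden-layer coordinate network x -> sdf, trained by plain SGD."""
    def __init__(self, d_in=3, d_h=64, lr=0.05):
        self.W1 = rng.normal(0, 1 / np.sqrt(d_in), (d_in, d_h))
        self.b1 = np.zeros(d_h)
        self.W2 = rng.normal(0, 1 / np.sqrt(d_h), (d_h, 1))
        self.b2 = np.zeros(1)
        self.lr = lr
    def forward(self, x):
        self.x = x
        self.h = np.tanh(x @ self.W1 + self.b1)
        return self.h @ self.W2 + self.b2
    def step(self, x, y):
        # One SGD step on mean-squared SDF error, gradients by hand.
        err = self.forward(x) - y[:, None]
        dW2 = self.h.T @ err / len(x); db2 = err.mean(0)
        dh = (err @ self.W2.T) * (1 - self.h**2)
        dW1 = self.x.T @ dh / len(x); db1 = dh.mean(0)
        self.W2 -= self.lr * dW2; self.b2 -= self.lr * db2
        self.W1 -= self.lr * dW1; self.b1 -= self.lr * db1
        return float((err**2).mean())

def sphere_sdf(x):
    """Toy ground-truth SDF (unit sphere), standing in for fused depth data."""
    return np.linalg.norm(x, axis=1) - 1.0

net, buf, losses = TinyMLP(), ReplayBuffer(capacity=2048), []
for frame in range(200):                      # simulated sequential observations
    x_new = rng.uniform(-1.5, 1.5, (64, 3))
    y_new = sphere_sdf(x_new)
    if buf.coords:                            # mix new samples with replayed ones
        x_old, y_old = buf.sample(64)
        x_batch = np.concatenate([x_new, x_old])
        y_batch = np.concatenate([y_new, y_old])
    else:
        x_batch, y_batch = x_new, y_new
    losses.append(net.step(x_batch, y_batch))
    buf.add(x_new, y_new)
```

The replayed half of each batch is what keeps the network from overwriting geometry it fitted on earlier frames; without it, plain SGD on the newest frame alone would drift toward whatever region was observed last.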

Related Material


@InProceedings{Yan_2021_ICCV,
    author    = {Yan, Zike and Tian, Yuxin and Shi, Xuesong and Guo, Ping and Wang, Peng and Zha, Hongbin},
    title     = {Continual Neural Mapping: Learning an Implicit Scene Representation From Sequential Observations},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {15782-15792}
}