vMAP: Vectorised Object Mapping for Neural Field SLAM

Xin Kong, Shikun Liu, Marwan Taher, Andrew J. Davison; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 952-961

Abstract


We present vMAP, an object-level dense SLAM system using neural field representations. Each object is represented by a small MLP, enabling efficient, watertight object modelling without the need for 3D priors. As an RGB-D camera browses a scene with no prior information, vMAP detects object instances on-the-fly and dynamically adds them to its map. Specifically, thanks to the power of vectorised training, vMAP can optimise as many as 50 individual objects in a single scene, updating its map at a highly efficient rate of 5 Hz. We experimentally demonstrate significantly improved scene-level and object-level reconstruction quality compared to prior neural field SLAM systems. Project page: https://kxhit.github.io/vMAP.
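The core efficiency claim rests on vectorised training: rather than looping over 50 separate object MLPs, their parameters can be stacked along a leading "object" axis so that one batched operation evaluates every model at once. Below is a minimal NumPy sketch of this idea for a two-layer MLP forward pass; the layer sizes, object count, and parameter initialisation are illustrative assumptions, not the paper's actual architecture (the authors' implementation uses vectorised PyTorch training).

```python
import numpy as np

rng = np.random.default_rng(0)

K = 50                     # number of object MLPs (matching the count quoted in the abstract)
D_in, H, D_out = 3, 32, 4  # assumed sizes: xyz input -> hidden -> output features
N = 128                    # sampled points per object (illustrative)

# Stacked parameters: one leading object axis instead of K separate models.
W1 = rng.standard_normal((K, D_in, H)) * 0.1
b1 = np.zeros((K, H))
W2 = rng.standard_normal((K, H, D_out)) * 0.1
b2 = np.zeros((K, D_out))

x = rng.standard_normal((K, N, D_in))  # per-object batches of query points

# A single batched einsum evaluates all K MLPs simultaneously.
h = np.maximum(np.einsum('knd,kdh->knh', x, W1) + b1[:, None, :], 0.0)  # ReLU
y = np.einsum('knh,kho->kno', h, W2) + b2[:, None, :]

print(y.shape)  # one forward pass for all 50 object models at once
```

The same stacking applies to the backward pass, which is why frameworks with a `vmap`-style transform can train many small per-object networks at close to the cost of one larger one.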

Related Material


@InProceedings{Kong_2023_CVPR,
    author    = {Kong, Xin and Liu, Shikun and Taher, Marwan and Davison, Andrew J.},
    title     = {vMAP: Vectorised Object Mapping for Neural Field SLAM},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {952-961}
}