FullFusion: A Framework for Semantic Reconstruction of Dynamic Scenes

Mihai Bujanca, Mikel Lujan, Barry Lennox; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2019

Abstract


Assuming that scenes are static is common in SLAM research. The world, however, is complex, dynamic, and populated by interactive agents. Mobile robots operating in real-world environments require a rich understanding of their surroundings, so it is crucial to represent the world in its dynamic complexity, beyond the geometry of static scene elements. We present a framework that enables incremental reconstruction of semantically-annotated 3D models in dynamic settings using commodity RGB-D sensors. Our method is the first to perform semantic reconstruction of non-rigidly deforming objects alongside a static background. FullFusion is a step towards giving robots a deeper and richer understanding of their surroundings, and can facilitate the study of interaction and scene dynamics. To showcase the potential of FullFusion, we provide a quantitative and qualitative evaluation of a baseline implementation that employs specific reconstruction and segmentation pipelines. The modular design of the framework, however, allows any of these components to be easily replaced with new or existing counterparts.

Related Material


[bibtex]
@InProceedings{Bujanca_2019_ICCV,
author = {Bujanca, Mihai and Lujan, Mikel and Lennox, Barry},
title = {FullFusion: A Framework for Semantic Reconstruction of Dynamic Scenes},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2019}
}