A Flexible Scene Representation for 3D Reconstruction Using an RGB-D Camera

Diego Thomas, Akihiro Sugimoto; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013, pp. 2800-2807

Abstract


Updating a global 3D model with live RGB-D measurements has proven successful for 3D reconstruction of indoor scenes. Recently, a Truncated Signed Distance Function (TSDF) volumetric model and a fusion algorithm were introduced (KinectFusion), showing significant advantages such as computational speed and accuracy of the reconstructed scene. This algorithm, however, is memory-expensive when constructing and updating the global model. As a consequence, the method does not scale well to large scenes. We propose a new flexible 3D scene representation using a set of planes that is memory-efficient and nevertheless achieves accurate reconstruction of indoor scenes from RGB-D image sequences. Projecting the scene onto different planes significantly reduces the size of the scene representation, allowing us to generate a global textured 3D model with lower memory requirements while retaining accuracy and ease of updating with live RGB-D measurements. Experimental results demonstrate that our proposed flexible 3D scene representation achieves accurate reconstruction while remaining scalable to large indoor scenes.
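The core idea of projecting the scene onto planes can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy version, not the paper's actual algorithm: it attaches a 2D grid to one plane (given an origin, two in-plane axes, and a normal) and stores, per cell, a signed distance along the normal, yielding a compact 2.5D "bump image" instead of a full 3D voxel volume. The function name, the per-cell keep-closest rule (a stand-in for the paper's weighted fusion of live measurements), and all parameters are illustrative.

```python
import numpy as np

def project_to_plane(points, origin, u, v, n, res=0.05):
    """Project 3D points onto a plane patch, keeping a sparse 2D grid
    of signed distances along the plane normal (a 2.5D representation).

    points: (N, 3) array of 3D points
    origin: a point on the plane; u, v: orthonormal in-plane axes
    n: unit normal of the plane; res: grid cell size (same units as points)
    """
    rel = points - origin
    a = rel @ u          # in-plane coordinate along u
    b = rel @ v          # in-plane coordinate along v
    d = rel @ n          # signed distance to the plane
    # Quantize in-plane coordinates into integer grid cells.
    i = np.floor(a / res).astype(int)
    j = np.floor(b / res).astype(int)
    grid = {}
    for k in range(len(points)):
        key = (i[k], j[k])
        # Keep the measurement closest to the plane per cell; the paper
        # instead fuses live measurements with running weighted averages.
        if key not in grid or abs(d[k]) < abs(grid[key]):
            grid[key] = float(d[k])
    return grid
```

The memory win over a TSDF volume comes from the dimensionality drop: a plane stores one value per occupied 2D cell rather than one value per 3D voxel in a dense grid, and only cells actually hit by measurements are allocated.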

Related Material


[bibtex]
@InProceedings{Thomas_2013_ICCV,
author = {Thomas, Diego and Sugimoto, Akihiro},
title = {A Flexible Scene Representation for 3D Reconstruction Using an RGB-D Camera},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2013}
}