X-Section: Cross-Section Prediction for Enhanced RGB-D Fusion

Andrea Nicastro, Ronald Clark, Stefan Leutenegger; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 1517-1526

Abstract


Detailed 3D reconstruction is an important challenge with applications in robotics, augmented reality, and virtual reality, and it has seen impressive progress in recent years. Advances have been driven by the availability of depth (RGB-D) cameras and by increased compute power, e.g. in the form of GPUs, but also by the inclusion of machine learning in the reconstruction process. Here, we propose X-Section, an RGB-D 3D reconstruction approach that leverages deep learning to make object-level predictions about thickness that can be readily integrated into a volumetric multi-view fusion process, for which we propose an extension to the popular KinectFusion approach. In essence, our method completes shape in general indoor scenes behind what is sensed by the RGB-D camera, which may be crucial e.g. for robotic manipulation tasks or efficient scene exploration. Predicting object thicknesses rather than volumes allows us to work at comparably high spatial resolution without exploding the memory and training-data requirements of the employed Convolutional Neural Networks. In a series of qualitative and quantitative evaluations, we demonstrate that our method accurately predicts object thickness and reconstructs general 3D scenes containing multiple objects.
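
The abstract describes integrating per-pixel thickness predictions into a KinectFusion-style volumetric fusion. The following is a minimal, illustrative sketch (not the authors' code) of how such an integration could look, assuming the network outputs a per-pixel thickness map aligned with the depth image; the function name fuse_with_thickness, the truncation parameter mu, and the exact update rule are assumptions for illustration only.

import numpy as np

def fuse_with_thickness(tsdf, weights, voxel_coords, depth, thickness,
                        K, cam_pose, mu=0.02):
    """Update a TSDF volume from one RGB-D frame plus a predicted thickness map.

    tsdf, weights : (N,) running TSDF values and fusion weights per voxel
    voxel_coords  : (N, 3) voxel centres in world coordinates (metres)
    depth         : (H, W) measured depth map (metres)
    thickness     : (H, W) predicted object thickness along each ray (metres)
    K             : (3, 3) camera intrinsics
    cam_pose      : (4, 4) camera-to-world transform
    """
    H, W = depth.shape

    # Transform voxel centres into the camera frame.
    world_to_cam = np.linalg.inv(cam_pose)
    pts = (world_to_cam[:3, :3] @ voxel_coords.T + world_to_cam[:3, 3:4]).T
    z = pts[:, 2]
    z_safe = np.where(z > 1e-6, z, 1e-6)

    # Project voxel centres into the image.
    u = np.round(pts[:, 0] / z_safe * K[0, 0] + K[0, 2]).astype(int)
    v = np.round(pts[:, 1] / z_safe * K[1, 1] + K[1, 2]).astype(int)
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)

    d = np.where(valid, depth[v.clip(0, H - 1), u.clip(0, W - 1)], 0.0)
    t = np.where(valid, thickness[v.clip(0, H - 1), u.clip(0, W - 1)], 0.0)
    valid &= d > 0

    # Signed distance along the ray: positive in front of the observed surface.
    sdf = d - z

    # Standard KinectFusion fuses only voxels with sdf >= -mu. Using the
    # predicted thickness, voxels up to t behind the observed surface are
    # also fused, marking the object's interior as occupied.
    update = valid & (sdf >= -(t + mu))
    tsdf_obs = np.clip(sdf / mu, -1.0, 1.0)

    # Weighted running average, as in KinectFusion.
    w_new = weights[update] + 1.0
    tsdf[update] = (tsdf[update] * weights[update] + tsdf_obs[update]) / w_new
    weights[update] = w_new
    return tsdf, weights

In this sketch, the only change with respect to a plain TSDF update is the widened integration band behind the surface, which is how a thickness prediction (rather than a full volume prediction) can complete shape at high spatial resolution.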

Related Material


[pdf] [supp] [video]
[bibtex]
@InProceedings{Nicastro_2019_ICCV,
author = {Nicastro, Andrea and Clark, Ronald and Leutenegger, Stefan},
title = {X-Section: Cross-Section Prediction for Enhanced RGB-D Fusion},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}