Geometry-guided Feature Learning and Fusion for Indoor Scene Reconstruction

Ruihong Yin, Sezer Karaoglu, Theo Gevers; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 3652-3661

Abstract


In addition to color and textural information, geometry provides important cues for 3D scene reconstruction. However, current reconstruction methods incorporate geometry only at the feature level and thus do not fully exploit the geometric information. In contrast, this paper proposes a novel geometry integration mechanism for 3D scene reconstruction. Our approach incorporates 3D geometry at three levels, i.e. feature learning, feature fusion, and network supervision. First, geometry-guided feature learning encodes geometric priors so that the learned features contain view-dependent information. Second, a geometry-guided adaptive feature fusion is introduced, which uses the geometric priors as guidance to adaptively generate weights for multiple views. Third, at the supervision level, a consistent 3D normal loss is designed that takes the consistency between 2D and 3D normals into account and adds local constraints. Large-scale experiments on the ScanNet dataset show that volumetric methods with our geometry integration mechanism outperform state-of-the-art methods both quantitatively and qualitatively. Volumetric methods with our mechanism also generalize well to the 7-Scenes and TUM RGB-D datasets.
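To make the two core ideas concrete, below is a minimal PyTorch sketch of (1) geometry-guided adaptive fusion of multi-view features and (2) a 2D-3D normal consistency loss, based only on the abstract. All names (GeometryGuidedFusion, normal_consistency_loss), tensor shapes, and the specific choice of geometric priors are illustrative assumptions and are not taken from the paper.

# Hypothetical sketch of geometry-guided adaptive feature fusion and a
# normal consistency loss; shapes and priors are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeometryGuidedFusion(nn.Module):
    """Fuse per-view voxel features with weights predicted from geometric priors."""

    def __init__(self, feat_dim: int, prior_dim: int, hidden_dim: int = 64):
        super().__init__()
        # Small MLP mapping per-view geometric priors (e.g., viewing direction,
        # projected depth) to one scalar weight per view.
        self.weight_mlp = nn.Sequential(
            nn.Linear(prior_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, view_feats: torch.Tensor, geo_priors: torch.Tensor,
                valid_mask: torch.Tensor) -> torch.Tensor:
        # view_feats: (N_voxels, N_views, feat_dim)
        # geo_priors: (N_voxels, N_views, prior_dim)
        # valid_mask: (N_voxels, N_views), True where the voxel projects into the view;
        # assumes every voxel is visible in at least one view.
        logits = self.weight_mlp(geo_priors).squeeze(-1)          # (N_voxels, N_views)
        logits = logits.masked_fill(~valid_mask, float("-inf"))   # ignore invalid views
        weights = torch.softmax(logits, dim=-1).unsqueeze(-1)     # (N_voxels, N_views, 1)
        return (weights * view_feats).sum(dim=1)                  # (N_voxels, feat_dim)

def normal_consistency_loss(pred_normals_3d: torch.Tensor,
                            ref_normals_3d: torch.Tensor) -> torch.Tensor:
    """Cosine-distance loss between predicted 3D normals and reference normals
    lifted from 2D normal predictions (one possible reading of the 2D-3D
    normal consistency described in the abstract)."""
    pred = F.normalize(pred_normals_3d, dim=-1)
    ref = F.normalize(ref_normals_3d, dim=-1)
    return (1.0 - (pred * ref).sum(dim=-1)).mean()

In such a setup, the fused per-voxel features would feed the volumetric reconstruction backbone, and the normal loss would be added to the usual TSDF/occupancy supervision; the actual architecture and loss weighting in the paper may differ.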

Related Material


@InProceedings{Yin_2023_ICCV,
    author    = {Yin, Ruihong and Karaoglu, Sezer and Gevers, Theo},
    title     = {Geometry-guided Feature Learning and Fusion for Indoor Scene Reconstruction},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {3652-3661}
}