VolumeFusion: Deep Depth Fusion for 3D Scene Reconstruction

Jaesung Choe, Sunghoon Im, Francois Rameau, Minjun Kang, In So Kweon; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 16086-16095

Abstract


To reconstruct a 3D scene from a set of calibrated views, traditional multi-view stereo techniques rely on two distinct stages: local depth map computation and global depth map fusion. Recent studies concentrate either on deep neural architectures for depth estimation combined with a conventional depth fusion method, or on direct 3D reconstruction networks that regress a Truncated Signed Distance Function (TSDF). In this paper, we advocate that replicating the traditional two-stage framework with deep neural networks improves both the interpretability and the accuracy of the results. Accordingly, our network operates in two steps: 1) the local computation of depth maps with a deep MVS technique, and 2) the fusion of these depth maps and image features into a single TSDF volume. To improve the matching performance between images acquired from very different viewpoints (e.g., large baselines and rotations), we introduce a rotation-invariant 3D convolution kernel called PosedConv. The effectiveness of the proposed architecture is underlined by a large series of experiments conducted on the ScanNet dataset, where our approach compares favorably against both traditional and deep learning techniques.
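For context, the sketch below illustrates the classical second stage the abstract refers to: fusing calibrated per-view depth maps into a single TSDF volume via KinectFusion-style weighted averaging. It is a minimal NumPy rendition of the conventional baseline, not the paper's learned fusion network or PosedConv; all function names, parameters, and default values (e.g., `voxel_size`, `trunc`) are hypothetical choices for illustration.

```python
# Minimal sketch of classical TSDF fusion of calibrated depth maps
# (the traditional "global depth map fusion" stage). Hypothetical API;
# NOT the paper's learned fusion network.
import numpy as np

def fuse_depth_maps(depths, intrinsics, cam_to_world, vol_origin,
                    voxel_size=0.04, vol_dim=(128, 128, 128), trunc=0.12):
    """Integrate calibrated depth maps into a truncated signed distance volume."""
    tsdf = np.ones(vol_dim, dtype=np.float32)      # +1 everywhere = empty space
    weight = np.zeros(vol_dim, dtype=np.float32)   # per-voxel integration weight

    # World coordinates of every voxel center, shape (N, 3).
    grid = np.stack(np.meshgrid(*[np.arange(d) for d in vol_dim],
                                indexing="ij"), axis=-1).reshape(-1, 3)
    pts_w = vol_origin + (grid + 0.5) * voxel_size

    for depth, K, T in zip(depths, intrinsics, cam_to_world):
        # Transform voxel centers into the camera frame and project with K.
        T_inv = np.linalg.inv(T)
        pts_c = pts_w @ T_inv[:3, :3].T + T_inv[:3, 3]
        z = pts_c[:, 2]
        uv = pts_c @ K.T
        u = np.round(uv[:, 0] / z).astype(int)
        v = np.round(uv[:, 1] / z).astype(int)

        h, w = depth.shape
        valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        d = np.zeros_like(z)
        d[valid] = depth[v[valid], u[valid]]

        # Truncated signed distance along the ray: positive in front of surface.
        sdf = d - z
        update = valid & (d > 0) & (sdf > -trunc)
        tsdf_new = np.clip(sdf / trunc, -1.0, 1.0)

        # Running weighted average per voxel (uniform per-view weights here).
        idx = grid[update]
        i, j, k = idx[:, 0], idx[:, 1], idx[:, 2]
        w_old = weight[i, j, k]
        tsdf[i, j, k] = (tsdf[i, j, k] * w_old + tsdf_new[update]) / (w_old + 1.0)
        weight[i, j, k] = w_old + 1.0

    return tsdf, weight
```

The final surface can then be extracted from the returned volume as the TSDF zero-crossing, e.g. with marching cubes. The paper's contribution is to replace this hand-crafted averaging with a learned fusion of depth maps and image features.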

Related Material


@InProceedings{Choe_2021_ICCV,
    author    = {Choe, Jaesung and Im, Sunghoon and Rameau, Francois and Kang, Minjun and Kweon, In So},
    title     = {VolumeFusion: Deep Depth Fusion for 3D Scene Reconstruction},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {16086-16095}
}