Multi-layer Depth and Epipolar Feature Transformers for 3D Scene Reconstruction

Daeyun Shin, Zhile Ren, Erik B. Sudderth, Charless C. Fowlkes; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019, pp. 39-43

Abstract
We tackle the problem of automatically reconstructing a complete 3D model of a scene from a single RGB image. This challenging task requires inferring the shape of both visible and occluded surfaces. Our approach utilizes a viewer-centered, multi-layer representation of scene geometry adapted from recent methods for single-object shape completion. To improve the accuracy of viewer-centered representations for complex scenes, we introduce a novel "Epipolar Feature Transformer" that transfers convolutional network features from an input view to other virtual camera viewpoints, and thus better covers the 3D scene geometry. Unlike existing approaches that first detect and localize objects in 3D, and then infer object shape using category-specific models, our approach is fully convolutional, end-to-end differentiable, and avoids the resolution and memory limitations of voxel representations. We demonstrate the advantages of multi-layer depth representations and epipolar feature transformers on the reconstruction of a large database of indoor scenes.
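
To make the feature-transfer idea concrete, here is a minimal PyTorch sketch (not the authors' implementation; the names transfer_features, K, R, and t are illustrative assumptions) of moving per-pixel CNN features from the input camera into a virtual camera view: each pixel is unprojected to 3D using a predicted depth map, mapped into the virtual camera frame, and reprojected onto that camera's image plane.

import torch

def transfer_features(feats, depth, K, R, t):
    """Transfer input-view features to a virtual viewpoint (hypothetical helper).

    feats: (C, H, W) CNN features in the input view
    depth: (H, W) predicted depth for the input view
    K:     (3, 3) intrinsics, assumed shared by both cameras
    R, t:  rotation (3, 3) and translation (3,) from the input camera
           frame to the virtual camera frame
    """
    C, H, W = feats.shape
    v, u = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                          torch.arange(W, dtype=torch.float32), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=0).reshape(3, -1)  # (3, H*W)
    # Unproject pixels to 3D points in the input camera frame,
    # then transform into the virtual camera frame.
    pts = torch.linalg.inv(K) @ pix * depth.reshape(1, -1)
    pts = R @ pts + t.reshape(3, 1)
    # Project into the virtual view.
    proj = K @ pts
    uv = proj[:2] / proj[2].clamp(min=1e-6)
    # Scatter features to their reprojected locations (nearest neighbor).
    # Later writes overwrite earlier ones; z-ordering is ignored for brevity.
    out = torch.zeros_like(feats)
    x = uv[0].round().long()
    y = uv[1].round().long()
    valid = (x >= 0) & (x < W) & (y >= 0) & (y < H) & (proj[2] > 0)
    out[:, y[valid], x[valid]] = feats.reshape(C, -1)[:, valid]
    return out

This forward-scattering variant leaves holes where no source pixel lands and resolves depth collisions arbitrarily; the paper's transformer instead reasons along epipolar lines in the virtual view, but the camera geometry above is the common core of any such transfer.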

Related Material

[bibtex]
@InProceedings{Shin_2019_CVPR_Workshops,
  author    = {Shin, Daeyun and Ren, Zhile and Sudderth, Erik B. and Fowlkes, Charless C.},
  title     = {Multi-layer Depth and Epipolar Feature Transformers for 3D Scene Reconstruction},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2019}
}