SliceNet: Deep Dense Depth Estimation From a Single Indoor Panorama Using a Slice-Based Representation

Giovanni Pintore, Marco Agus, Eva Almansa, Jens Schneider, Enrico Gobbetti; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 11536-11545

Abstract


We introduce a novel deep neural network to estimate a depth map from a single monocular indoor panorama. The network works directly on the equirectangular projection, exploiting the properties of indoor 360° images. Motivated by the fact that gravity plays an important role in the design and construction of man-made indoor scenes, we propose a compact representation of the scene as vertical slices of the sphere, and we exploit long- and short-term relationships among slices to recover the equirectangular depth map. Our design makes it possible to maintain high-resolution information in the extracted features even with a deep network. Experimental results demonstrate that our method outperforms current state-of-the-art solutions in prediction accuracy, particularly on real-world data.
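The core idea above can be illustrated with a minimal sketch: partition an equirectangular panorama into vertical slices spanning the full vertical field of view, then treat the slices as a 1-D sequence along the azimuth. The slice width and the per-slice feature used here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Hypothetical sketch of the slice-based representation: an equirectangular
# panorama of shape (H, W, 3) is partitioned into vertical column slices,
# each covering the full vertical field of view of the sphere.
H, W = 256, 512                       # 2:1 equirectangular resolution (assumed)
pano = np.random.rand(H, W, 3)        # dummy RGB panorama

slice_width = 1                       # illustrative choice, not from the paper
slices = [pano[:, x:x + slice_width, :] for x in range(0, W, slice_width)]

# A per-slice feature (here simply the mean over height and channels, as a
# stand-in for a learned encoder) yields a 1-D sequence along the azimuth.
# The paper models long- and short-term relationships along such a sequence
# to recover a dense equirectangular depth map.
features = np.array([s.mean() for s in slices])

print(len(slices))                    # 512 slices
print(features.shape)                 # (512,)
```

Because each slice keeps the full image height, high-resolution vertical information survives even as the sequence of slices is processed by a deep model, which is the property the abstract highlights.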

Related Material


[bibtex]
@InProceedings{Pintore_2021_CVPR,
  author    = {Pintore, Giovanni and Agus, Marco and Almansa, Eva and Schneider, Jens and Gobbetti, Enrico},
  title     = {SliceNet: Deep Dense Depth Estimation From a Single Indoor Panorama Using a Slice-Based Representation},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2021},
  pages     = {11536-11545}
}