Recovering 3D Planes from a Single Image via Convolutional Neural Networks

Fengting Yang, Zihan Zhou; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 85-100

Abstract


In this paper, we study the problem of recovering 3D planar surfaces from a single image of a man-made environment. We show that it is possible to directly train a deep neural network to achieve this goal. A novel plane structure-induced loss is proposed to train the network to simultaneously predict a plane segmentation map and the parameters of the 3D planes. Further, to avoid the tedious manual labeling process, we show how to leverage existing large-scale RGB-D datasets to train our network without explicit 3D plane annotations, and how to take advantage of the semantic labels that come with the dataset for accurate planar/non-planar classification. Experimental results demonstrate that our method significantly outperforms existing methods, both qualitatively and quantitatively. The recovered planes could potentially benefit many important visual tasks such as vision-based navigation and human-robot interaction.
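
To make the idea of a plane structure-induced loss concrete, below is a minimal PyTorch sketch of one plausible form of such a loss. It assumes the network predicts per-plane parameters planes (B, N, 3) such that a 3D point P on plane i satisfies planes[i] . P = 1, plus a soft segmentation map seg (B, N+1, H, W) whose extra channel absorbs non-planar pixels; the ground-truth depth from an RGB-D dataset and the camera intrinsics substitute for explicit plane annotations. All names, shapes, and the exact residual are illustrative assumptions, not the authors' code.

import torch
import torch.nn.functional as F

def plane_structure_loss(planes, seg, depth, K_inv):
    """Hypothetical plane structure-induced loss (sketch, not the paper's implementation).

    planes: (B, N, 3)    predicted plane parameters, plane equation n . P = 1
    seg:    (B, N+1, H, W) segmentation logits; last channel = non-planar
    depth:  (B, 1, H, W) ground-truth depth from the RGB-D dataset
    K_inv:  (B, 3, 3)    inverse camera intrinsics
    """
    B, Np1, H, W = seg.shape
    N = Np1 - 1

    # Back-project every pixel to a 3D point P = depth * K^{-1} [u, v, 1]^T.
    v, u = torch.meshgrid(
        torch.arange(H, dtype=depth.dtype, device=depth.device),
        torch.arange(W, dtype=depth.dtype, device=depth.device),
        indexing="ij",
    )
    pix = torch.stack([u, v, torch.ones_like(u)], dim=0).reshape(1, 3, H * W)
    rays = K_inv @ pix.expand(B, 3, H * W)              # (B, 3, HW)
    points = rays * depth.reshape(B, 1, H * W)          # (B, 3, HW)

    # Per-plane geometric residual: |n_i . P - 1| should vanish for pixels on plane i.
    residual = torch.abs(planes @ points - 1.0)         # (B, N, HW)

    # Weight residuals by the soft pixel-to-plane assignment, so the network
    # can route non-planar pixels to the extra segmentation channel.
    probs = F.softmax(seg, dim=1).reshape(B, Np1, H * W)
    return (probs[:, :N] * residual).sum(dim=1).mean()

In this sketch, supervision comes only from depth: the segmentation and plane parameters are tied together because a pixel only lowers the loss when it is assigned to a plane whose equation fits its back-projected 3D point.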

Related Material


[bibtex]
@InProceedings{Yang_2018_ECCV,
author = {Yang, Fengting and Zhou, Zihan},
title = {Recovering 3D Planes from a Single Image via Convolutional Neural Networks},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}