RoomNet: End-To-End Room Layout Estimation

Chen-Yu Lee, Vijay Badrinarayanan, Tomasz Malisiewicz, Andrew Rabinovich; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 4865-4874

Abstract


This paper focuses on the task of room layout estimation from a monocular RGB image. Prior works break the problem into two sub-tasks: semantic segmentation of the floor, walls, and ceiling to produce layout hypotheses, followed by an iterative optimization step to rank these hypotheses. In contrast, we adopt a more direct formulation of this problem as one of estimating an ordered set of room layout keypoints. The room layout and the corresponding segmentation are completely specified given the locations of these ordered keypoints. We predict the locations of the room layout keypoints using RoomNet, an end-to-end trainable encoder-decoder network. On the challenging benchmark datasets Hedau and LSUN, we achieve state-of-the-art performance along with a 200x to 600x speedup compared to the most recent work. Additionally, we present optional extensions to the RoomNet architecture, such as the inclusion of recurrent computations and memory units to refine the keypoint locations under the same parametric capacity.
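To make the keypoint-based formulation concrete, below is a minimal PyTorch sketch of an encoder-decoder that maps an RGB image to per-keypoint heatmaps plus a room-type classification branch. It is not the authors' RoomNet implementation: the layer widths, depths, input resolution, and the keypoint/room-type counts used here are illustrative assumptions only.

import torch
import torch.nn as nn

class RoomLayoutKeypointNet(nn.Module):
    # Sketch of an encoder-decoder keypoint predictor (assumed sizes, not RoomNet's).
    def __init__(self, num_keypoint_maps=48, num_room_types=11):
        super().__init__()
        # Encoder: downsample the image while increasing feature channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )
        # Decoder: upsample back and emit one heatmap channel per layout keypoint.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, num_keypoint_maps, 2, stride=2),
        )
        # Side branch predicting the room layout type from the bottleneck features.
        self.room_type_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(256, num_room_types)
        )

    def forward(self, rgb):
        features = self.encoder(rgb)
        heatmaps = self.decoder(features)          # (B, K, H, W) keypoint heatmaps
        room_type = self.room_type_head(features)  # (B, num_room_types) logits
        return heatmaps, room_type

# Example: a 320x320 RGB image yields keypoint heatmaps and room-type logits.
model = RoomLayoutKeypointNet()
heatmaps, room_type = model(torch.randn(1, 3, 320, 320))

Once the ordered keypoints are read off as the argmax of each heatmap (in the order dictated by the predicted room type), the room layout polygon and its segmentation follow directly, which is what removes the hypothesis-generation and ranking stages of prior pipelines.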

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Lee_2017_ICCV,
author = {Lee, Chen-Yu and Badrinarayanan, Vijay and Malisiewicz, Tomasz and Rabinovich, Andrew},
title = {RoomNet: End-To-End Room Layout Estimation},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}