Fast and Flexible Indoor Scene Synthesis via Deep Convolutional Generative Models

Daniel Ritchie, Kai Wang, Yu-An Lin; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 6182-6190

Abstract


We present a new, fast and flexible pipeline for indoor scene synthesis that is based on deep convolutional generative models. Our method operates on a top-down image-based representation, and inserts objects iteratively into the scene by predicting their category, location, orientation and size with separate neural network modules. Our pipeline naturally supports automatic completion of partial scenes, as well as synthesis of complete scenes, without any modifications. Our method is significantly faster than the previous image-based method, and generates results that outperform it and other state-of-the-art deep generative scene models in terms of faithfulness to training data and perceived visual quality.
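The abstract describes an iterative insertion loop in which separate modules predict, in order, the next object's category, location, orientation and size, and partial-scene completion falls out of the same loop by seeding it with existing objects. The sketch below illustrates only that control flow; all function names are hypothetical stand-ins, and the trivial stub predictors replace the paper's actual convolutional networks.

```python
import random

# Hypothetical stubs standing in for the paper's four neural modules.
# They are trivial so the loop is runnable; the real modules condition
# on a top-down image rendering of the scene.

CATEGORIES = ["bed", "nightstand", "wardrobe", "desk"]

def predict_category(scene):
    # The category module may also decide the scene is complete;
    # this stub simply stops after three objects.
    if len(scene) >= 3:
        return None
    return random.choice(CATEGORIES)

def predict_location(scene, category):
    # Normalized top-down (x, y) coordinates.
    return (random.uniform(0, 1), random.uniform(0, 1))

def predict_orientation(scene, category, loc):
    return random.uniform(0.0, 360.0)

def predict_size(scene, category, loc, angle):
    return random.uniform(0.5, 2.0)

def synthesize(partial_scene=None, max_objects=10):
    """Insert objects one at a time until the category module signals stop.

    Passing a non-empty partial_scene turns synthesis into scene
    completion with no change to the loop itself.
    """
    scene = list(partial_scene or [])
    for _ in range(max_objects):
        category = predict_category(scene)
        if category is None:
            break
        loc = predict_location(scene, category)
        angle = predict_orientation(scene, category, loc)
        size = predict_size(scene, category, loc, angle)
        scene.append({"category": category, "loc": loc,
                      "orientation": angle, "size": size})
    return scene
```

For example, `synthesize()` builds a scene from scratch, while `synthesize(partial_scene=[...])` completes an existing one, mirroring the "without any modifications" claim in the abstract.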

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Ritchie_2019_CVPR,
author = {Ritchie, Daniel and Wang, Kai and Lin, Yu-An},
title = {Fast and Flexible Indoor Scene Synthesis via Deep Convolutional Generative Models},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}