Unconstrained Scene Generation With Locally Conditioned Radiance Fields

Terrance DeVries, Miguel Angel Bautista, Nitish Srivastava, Graham W. Taylor, Joshua M. Susskind; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 14304-14313

Abstract


We tackle the challenge of learning a distribution over complex, realistic, indoor scenes. In this paper, we introduce Generative Scene Networks (GSN), which learns to decompose scenes into a collection of many local radiance fields that can be rendered from a freely moving camera. Our model can be used as a prior to generate new scenes, or to complete a scene given only sparse 2D observations. Recent work has shown that generative models of radiance fields can capture properties such as multi-view consistency and view-dependent lighting. However, these models are specialized for constrained viewing of single objects, such as cars or faces. Due to the size and complexity of realistic indoor environments, existing models lack the representational capacity to adequately capture them. Our decomposition scheme scales to larger and more complex scenes while preserving details and diversity, and the learned prior enables high-quality rendering from viewpoints that are significantly different from observed viewpoints. When compared to existing models, GSN produces quantitatively higher quality scene renderings across several different scene datasets.
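As a rough illustration of the locally conditioned idea described above, the sketch below conditions a small radiance-field MLP on latent codes bilinearly sampled from a 2D grid laid out over the scene floorplan, so each 3D query point is decoded using only its nearby local code. This is a minimal sketch in PyTorch; the names (LocallyConditionedField, latent_grid) and all hyperparameters are hypothetical, and GSN's actual architecture, training, and rendering pipeline differ in detail.

# Hypothetical sketch of a locally conditioned radiance field.
# Not the authors' implementation; dimensions chosen for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocallyConditionedField(nn.Module):
    def __init__(self, code_dim=32, grid_size=16, hidden=128):
        super().__init__()
        # 2D grid of local latent codes laid out over the scene floorplan.
        self.latent_grid = nn.Parameter(
            torch.randn(1, code_dim, grid_size, grid_size))
        # Small MLP mapping (local code, 3D point) -> (density, RGB).
        self.mlp = nn.Sequential(
            nn.Linear(code_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 1 density + 3 color channels
        )

    def forward(self, points):
        # points: (N, 3) world coordinates normalized to [-1, 1]^3.
        # Sample each point's local code at its floorplan location (x, z).
        xz = points[:, [0, 2]].view(1, -1, 1, 2)       # (1, N, 1, 2)
        codes = F.grid_sample(self.latent_grid, xz,    # (1, C, N, 1)
                              mode="bilinear", align_corners=True)
        codes = codes.squeeze(0).squeeze(-1).t()       # (N, C)
        out = self.mlp(torch.cat([codes, points], dim=-1))
        density = F.softplus(out[:, :1])               # non-negative density
        rgb = torch.sigmoid(out[:, 1:])                # colors in [0, 1]
        return density, rgb

field = LocallyConditionedField()
pts = torch.rand(1024, 3) * 2 - 1                      # random query points
sigma, color = field(pts)
print(sigma.shape, color.shape)                        # (1024, 1) (1024, 3)

Because each query point is decoded against only its nearby latent code, model capacity is spent locally rather than on the whole scene at once, which is the property the abstract credits for scaling to large, complex indoor environments.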

Related Material


@InProceedings{DeVries_2021_ICCV,
    author    = {DeVries, Terrance and Bautista, Miguel Angel and Srivastava, Nitish and Taylor, Graham W. and Susskind, Joshua M.},
    title     = {Unconstrained Scene Generation With Locally Conditioned Radiance Fields},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {14304-14313}
}