3D-PRNN: Generating Shape Primitives With Recurrent Neural Networks

Chuhang Zou, Ersin Yumer, Jimei Yang, Duygu Ceylan, Derek Hoiem; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 900-909

Abstract


The success of various applications, including robotics, digital content creation, and visualization, demands a structured and abstract representation of the 3D world from limited sensor data. Inspired by the nature of human perception of 3D shapes as a collection of simple parts, we explore such an abstract shape representation based on primitives. Given a single depth image of an object, we present 3D-PRNN, a generative recurrent neural network that synthesizes multiple plausible shapes composed of a set of primitives. Our generative model encodes symmetry characteristics of common man-made objects, preserves long-range structural coherence, and describes objects of varying complexity with a compact representation. We also propose a method based on Gaussian Fields to generate a large-scale dataset of primitive-based shape representations to train our network. We evaluate our approach on a wide range of examples and show that it outperforms nearest-neighbor-based shape retrieval methods and is on par with voxel-based generative models while using a significantly reduced parameter space.
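The core idea of the abstract, decoding a shape as a sequence of primitive parameters with a recurrent network that also predicts when to stop, can be illustrated with a minimal sketch. This is not the paper's actual architecture (3D-PRNN uses a trained encoder and mixture-density outputs); all dimensions, weights, and the `generate_primitives` function below are hypothetical stand-ins using randomly initialized numpy arrays.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen for illustration only.
HIDDEN = 32      # recurrent state size
PARAMS = 7       # per-primitive: 3 scale + 3 translation + 1 rotation angle
MAX_PRIMS = 6    # cap on the generated sequence length

# Random weights stand in for a trained model.
W_h = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
W_x = rng.normal(0, 0.1, (HIDDEN, PARAMS))
W_out = rng.normal(0, 0.1, (PARAMS, HIDDEN))
W_stop = rng.normal(0, 0.1, (1, HIDDEN))

def generate_primitives(depth_feature):
    """Sequentially decode primitive parameters from an encoded depth image.

    depth_feature: 1-D array of length HIDDEN, standing in for the
    feature vector an image encoder would produce.
    """
    h = np.tanh(depth_feature)           # initialize state from the image code
    x = np.zeros(PARAMS)                 # "start" token: an empty primitive
    prims = []
    for _ in range(MAX_PRIMS):
        h = np.tanh(W_h @ h + W_x @ x)   # recurrent state update
        x = W_out @ h                    # predict the next primitive's parameters
        stop = 1 / (1 + np.exp(-(W_stop @ h)[0]))  # stop probability
        prims.append(x.copy())
        if stop > 0.5:                   # halt when the model signals "done"
            break
    return prims

prims = generate_primitives(rng.normal(size=HIDDEN))
print(f"{len(prims)} primitives, {prims[0].shape[0]} parameters each")
```

The sequential formulation is what gives the representation its compactness: a chair might need only a handful of 7-parameter primitives, versus thousands of occupied cells in a voxel grid, which is the parameter-space reduction the abstract refers to.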

Related Material


[pdf]
[bibtex]
@InProceedings{Zou_2017_ICCV,
author = {Zou, Chuhang and Yumer, Ersin and Yang, Jimei and Ceylan, Duygu and Hoiem, Derek},
title = {3D-PRNN: Generating Shape Primitives With Recurrent Neural Networks},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}