PQ-NET: A Generative Part Seq2Seq Network for 3D Shapes

Rundi Wu, Yixin Zhuang, Kai Xu, Hao Zhang, Baoquan Chen; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 829-838

Abstract


We introduce PQ-NET, a deep neural network which represents and generates 3D shapes via sequential part assembly. The input to our network is a 3D shape segmented into parts, where each part is first encoded into a feature representation using a part autoencoder. The core component of PQ-NET is a sequence-to-sequence or Seq2Seq autoencoder which encodes a sequence of part features into a latent vector of fixed size, and the decoder reconstructs the 3D shape, one part at a time, resulting in a sequential assembly. The latent space formed by the Seq2Seq encoder encodes both part structure and fine part geometry. The decoder can be adapted to perform several generative tasks including shape autoencoding, interpolation, novel shape generation, and single-view 3D reconstruction, where the generated shapes are all composed of meaningful parts.
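To make the sequential encode/decode idea above concrete, the following is a minimal sketch (not the authors' code) of a Seq2Seq part autoencoder: a recurrent encoder compresses a variable-length sequence of per-part feature vectors into one fixed-size latent code, and a recurrent decoder emits part features back one step at a time. The GRU units, the 128-dimensional part features, and the 256-dimensional latent code are illustrative assumptions, not values from the paper; in the full model the recovered per-part features would additionally be turned back into part geometry by the part autoencoder's decoder, which is omitted here.

import torch
import torch.nn as nn

class PartSeq2SeqAE(nn.Module):
    """Toy Seq2Seq autoencoder over a sequence of part feature vectors."""

    def __init__(self, part_feat_dim=128, hidden_dim=256, latent_dim=256):
        super().__init__()
        self.encoder = nn.GRU(part_feat_dim, hidden_dim, batch_first=True)
        self.to_latent = nn.Linear(hidden_dim, latent_dim)
        self.decoder = nn.GRUCell(part_feat_dim, latent_dim)
        self.out = nn.Linear(latent_dim, part_feat_dim)

    def encode(self, part_feats):
        # part_feats: (batch, num_parts, part_feat_dim)
        _, h = self.encoder(part_feats)          # h: (1, batch, hidden_dim)
        return self.to_latent(h.squeeze(0))      # fixed-size code: (batch, latent_dim)

    def decode(self, z, num_parts):
        # Autoregressive reconstruction: predict one part feature at a time,
        # feeding each prediction back in as the next input.
        batch = z.size(0)
        inp = z.new_zeros(batch, self.out.out_features)  # "start" token
        h = z
        outputs = []
        for _ in range(num_parts):
            h = self.decoder(inp, h)
            inp = self.out(h)
            outputs.append(inp)
        return torch.stack(outputs, dim=1)       # (batch, num_parts, part_feat_dim)

    def forward(self, part_feats):
        z = self.encode(part_feats)
        return self.decode(z, part_feats.size(1)), z

# Example usage with random stand-in part features:
# model = PartSeq2SeqAE()
# parts = torch.randn(4, 5, 128)     # 4 shapes, 5 parts each, 128-d part codes
# recon, z = model(parts)            # recon: (4, 5, 128), z: (4, 256)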

Related Material


@InProceedings{Wu_2020_CVPR,
author = {Wu, Rundi and Zhuang, Yixin and Xu, Kai and Zhang, Hao and Chen, Baoquan},
title = {PQ-NET: A Generative Part Seq2Seq Network for 3D Shapes},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}