Shape Inpainting Using 3D Generative Adversarial Network and Recurrent Convolutional Networks

Weiyue Wang, Qiangui Huang, Suya You, Chao Yang, Ulrich Neumann; The IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2298-2306

Abstract


Recent advances in convolutional neural networks have shown promising results in 3D shape completion. However, due to GPU memory limitations, these methods can only produce low-resolution outputs. To inpaint 3D models with semantic plausibility and contextual details, we introduce a hybrid framework that combines a 3D Encoder-Decoder Generative Adversarial Network (3D-ED-GAN) and a Long-term Recurrent Convolutional Network (LRCN). The 3D-ED-GAN is a 3D convolutional neural network trained with a generative adversarial paradigm to fill missing 3D data at low resolution. The LRCN adopts a recurrent neural network architecture to minimize GPU memory usage and incorporates an Encoder-Decoder pair into a Long Short-term Memory Network. By treating the 3D model as a sequence of 2D slices, the LRCN transforms a coarse 3D shape into a more complete and higher-resolution volume. While the 3D-ED-GAN captures the global contextual structure of the 3D shape, the LRCN localizes the fine-grained details. Experimental results on both real-world and synthetic data show that the framework reconstructs complete, high-resolution 3D objects from corrupted models.
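The slice-as-sequence idea behind the LRCN can be sketched as follows. Everything here is illustrative, not the paper's architecture: the toy NumPy LSTM, the random untrained weights, the flattened-slice encoder, and the fixed slice count are all simplifying assumptions (the actual LRCN wraps learned 2D convolutional encoder/decoder networks around the LSTM and upsamples along all three axes). The sketch only shows the memory-saving pattern: consume one low-resolution 2D slice per time step and emit one higher-resolution slice, so no full high-resolution 3D volume ever passes through a single network layer.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SliceLSTM:
    """Toy LSTM that consumes one flattened 2D slice per time step.
    Dimensions and random weights are illustrative only."""
    def __init__(self, in_dim, hidden_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        # One stacked weight matrix for the four gates (input, forget, output, cell).
        self.W = rng.standard_normal((4 * hidden_dim, in_dim + hidden_dim)) * 0.1
        self.b = np.zeros(4 * hidden_dim)
        # Toy "decoder": a linear map from hidden state to a high-res slice.
        self.W_out = rng.standard_normal((out_dim, hidden_dim)) * 0.1
        self.hidden_dim = hidden_dim

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        H = self.hidden_dim
        i, f, o = sigmoid(z[:H]), sigmoid(z[H:2 * H]), sigmoid(z[2 * H:3 * H])
        g = np.tanh(z[3 * H:])
        c = f * c + i * g          # update cell state
        h = o * np.tanh(c)         # update hidden state
        return h, c

def refine_volume(volume, up=2, hidden_dim=32, seed=0):
    """Treat `volume` as a sequence of 2D slices along axis 0 and emit one
    (up x)-larger slice per step; the slice count stays fixed for simplicity."""
    d, hgt, wid = volume.shape
    lstm = SliceLSTM(hgt * wid, hidden_dim, (hgt * up) * (wid * up), seed)
    h = np.zeros(hidden_dim)
    c = np.zeros(hidden_dim)
    out_slices = []
    for t in range(d):
        h, c = lstm.step(volume[t].ravel(), h, c)
        # Sigmoid keeps each output voxel in (0, 1), i.e. an occupancy probability.
        out_slices.append(sigmoid(lstm.W_out @ h).reshape(hgt * up, wid * up))
    return np.stack(out_slices)

# A random binary 8 x 16 x 16 occupancy grid stands in for a coarse completion.
low = (np.random.default_rng(1).random((8, 16, 16)) > 0.5).astype(np.float32)
high = refine_volume(low, up=2)
print(high.shape)  # (8, 32, 32)
```

Because the recurrent state carries context across slices, each output slice can depend on the shape seen so far, which is what lets the sequential formulation trade GPU memory for time compared with a single dense 3D network.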

Related Material


[pdf]
[bibtex]
@InProceedings{Wang_2017_ICCV,
author = {Wang, Weiyue and Huang, Qiangui and You, Suya and Yang, Chao and Neumann, Ulrich},
title = {Shape Inpainting Using 3D Generative Adversarial Network and Recurrent Convolutional Networks},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}