3SGAN: 3D Shape Embedded Generative Adversarial Networks

Fengdi Che, Xiru Zhu, Tianzi Yang, Tzu-yu Yang; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2019

Abstract


Despite recent advances in Generative Adversarial Networks (GANs) for image generation, significant gaps remain in generating coherent boundaries and spatial structure. In this paper, we propose a new approach that generates edge and depth information jointly with an RGB image to address this problem. More specifically, we propose two new regularization models. Our first model enforces image-depth-edge alignment by controlling the second-order derivative of the depth map and the first-order derivative of the RGB map, enforcing smoothness and consistency. The second model leverages multiview synthesis to regularize RGB and depth by computing the difference between an expected rotated view of an object and a conditionally generated view of that object; enforcing this projection consistency enables the model to directly learn spatial structure and depth. To evaluate our approach, we generated an RGB-D dataset with edge contours from ShapeNet models. Furthermore, we used an existing RGB-D dataset, NYU Depth V2, with edges extracted by the Holistically-Nested Edge Detection model.
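The first regularizer described above could plausibly take the form of an edge-aware smoothness penalty: the second-order derivative of the depth map is penalized, down-weighted where the RGB image has strong first-order gradients (i.e., near likely object boundaries). The paper's exact loss is not given in the abstract, so the following NumPy function is only a hedged sketch of that idea; the function name, the exponential weighting, and all constants are assumptions, not the authors' implementation.

```python
import numpy as np

def edge_aware_smoothness(depth, rgb):
    """Hypothetical sketch of a depth-smoothness regularizer:
    penalize second-order depth derivatives, weighted down where
    first-order RGB gradients are large (probable edges)."""
    # Second-order finite differences of the depth map along x and y.
    d2x = np.abs(depth[:, 2:] - 2.0 * depth[:, 1:-1] + depth[:, :-2])
    d2y = np.abs(depth[2:, :] - 2.0 * depth[1:-1, :] + depth[:-2, :])
    # First-order RGB gradients, averaged over color channels.
    gx = np.mean(np.abs(rgb[:, 1:, :] - rgb[:, :-1, :]), axis=-1)
    gy = np.mean(np.abs(rgb[1:, :, :] - rgb[:-1, :, :]), axis=-1)
    # Edge-aware weights: less smoothing where RGB edges are strong.
    wx = np.exp(-gx[:, 1:])   # crop to match d2x's (H, W-2) shape
    wy = np.exp(-gy[1:, :])   # crop to match d2y's (H-2, W) shape
    return float((d2x * wx).mean() + (d2y * wy).mean())
```

Under this formulation a constant or linearly ramped depth map incurs zero penalty, while abrupt depth kinks are penalized unless they coincide with strong RGB gradients, which is one way to encourage the image-depth-edge alignment the abstract describes.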

Related Material


[bibtex]
@InProceedings{Che_2019_ICCV,
author = {Che, Fengdi and Zhu, Xiru and Yang, Tianzi and Yang, Tzu-yu},
title = {3SGAN: 3D Shape Embedded Generative Adversarial Networks},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2019}
}