3DHumanGAN: 3D-Aware Human Image Generation with 3D Pose Mapping

Zhuoqian Yang, Shikai Li, Wayne Wu, Bo Dai; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 23008-23019

Abstract


We present 3DHumanGAN, a 3D-aware generative adversarial network that synthesizes photorealistic images of full-body humans with consistent appearance across different viewing angles and body poses. To tackle the representational and computational challenges of synthesizing the articulated structure of human bodies, we propose a novel generator architecture in which a 2D convolutional backbone is modulated by a 3D pose mapping network. The 3D pose mapping network is formulated as a renderable implicit function conditioned on a posed 3D human mesh. This design has several merits: i) it leverages the strength of 2D GANs to produce high-quality images; ii) it generates consistent images under varying viewing angles and poses; iii) it incorporates a 3D human prior and enables pose conditioning. Project page: https://3dhumangan.github.io/.
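The abstract describes a 2D convolutional backbone modulated by a 3D pose mapping network. The numpy sketch below illustrates one plausible reading of that coupling, not the authors' implementation: the "pose mapping network" is reduced to a small MLP over 3D query points plus a global pose code (a stand-in for the posed 3D mesh), its output is "rendered" by sampling one point per pixel on a fronto-parallel plane, and the backbone features are modulated with SPADE-style spatially-adaptive normalization. All names and architectural choices here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, weights):
    """Tiny MLP with ReLU hidden activations; x has shape (N, d_in)."""
    h = x
    for i, (W, b) in enumerate(weights):
        h = h @ W + b
        if i < len(weights) - 1:
            h = np.maximum(h, 0.0)
    return h

def make_mlp(d_in, d_hidden, d_out):
    """Random weights for a 2-layer MLP (illustrative, untrained)."""
    return [
        (rng.standard_normal((d_in, d_hidden)) * 0.1, np.zeros(d_hidden)),
        (rng.standard_normal((d_hidden, d_out)) * 0.1, np.zeros(d_out)),
    ]

# --- 3D pose mapping network (hypothetical simplification) ---
# A renderable implicit function: maps a 3D query point, conditioned on a
# pose code, to per-pixel modulation parameters (gamma, beta).
H, W_img, C = 8, 8, 16
pose_code = rng.standard_normal(4)       # stand-in for a posed human mesh
pose_net = make_mlp(3 + 4, 32, 2 * C)

# "Render" the implicit function: one 3D sample per pixel on a fronto-
# parallel plane (a stand-in for sampling against the mesh surface).
ys, xs = np.meshgrid(np.linspace(-1, 1, H),
                     np.linspace(-1, 1, W_img), indexing="ij")
points = np.stack([xs, ys, np.zeros_like(xs)], axis=-1).reshape(-1, 3)
inp = np.concatenate(
    [points, np.broadcast_to(pose_code, (points.shape[0], 4))], axis=1)
mod = mlp(inp, pose_net).reshape(H, W_img, 2 * C)
gamma, beta = mod[..., :C], mod[..., C:]

# --- 2D convolutional backbone (one feature map shown) ---
# Spatially-adaptive modulation of normalized backbone features by the
# rendered pose features; this is our assumed form of "modulation".
feat = rng.standard_normal((H, W_img, C))
feat_norm = (feat - feat.mean((0, 1))) / (feat.std((0, 1)) + 1e-5)
feat_mod = (1.0 + gamma) * feat_norm + beta   # pose-conditioned features

print(feat_mod.shape)  # (8, 8, 16)
```

Because gamma and beta vary per pixel with the 3D sample positions, the same backbone produces features that track the conditioning pose, which is the intuition behind pose-consistent synthesis in the abstract.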

Related Material


@InProceedings{Yang_2023_ICCV,
    author    = {Yang, Zhuoqian and Li, Shikai and Wu, Wayne and Dai, Bo},
    title     = {3DHumanGAN: 3D-Aware Human Image Generation with 3D Pose Mapping},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {23008-23019}
}