Learn a Global Appearance Semi-Supervisedly for Synthesizing Person Images

Zhipeng Ge, Fei Chen, Yu Zhou, Yao Yu, Sidan Du; The IEEE Winter Conference on Applications of Computer Vision (WACV), 2020, pp. 1190-1199

Abstract


We present a novel approach to person image synthesis that can generate person images in arbitrary poses, shapes, and views. Unlike existing methods that use only keypoint locations in heatmap format, we propose to render the SMPL model to UV maps, which provide structural information about human pose and shape. Thus, by varying the pose, shape, and camera parameters of the SMPL model, we can generate person images with various attributes in a simple way, whereas new body shapes could previously be obtained only through computer graphics methods. We train an end-to-end generative adversarial network on unlabeled data. As our SMPL parameters come from a pretrained model, we call the overall network semi-supervised. Our network maintains a global appearance during the fine-tuning stage for the target person, so we obtain a complete appearance of the target person rather than the inaccurate appearance that results from inference without sufficient information. Experiments on the Human3.6M dataset and a self-collected dataset demonstrate the effectiveness of our approach to person image synthesis across different applications.
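The "varying the parameters of poses, shapes and camera" idea rests on SMPL's linear blendshape formulation: a mesh is a template deformed by a shape-coefficient vector (pose blendshapes and skinning follow the same pattern). The sketch below illustrates only that shape step with random placeholder matrices; the array sizes mirror SMPL (6890 vertices, 10 shape coefficients), but none of this is the real SMPL model data, and a real pipeline would load the released model, e.g. via the `smplx` library.

```python
import numpy as np

# Illustrative sketch of SMPL's shape blendshape step: V = T + B_S . beta.
# All matrices are random stand-ins, NOT the real SMPL model data.
rng = np.random.default_rng(0)

N_VERTS = 6890   # SMPL mesh resolution
N_BETAS = 10     # number of shape coefficients

template = rng.normal(size=(N_VERTS, 3))             # mean body mesh T
shape_dirs = rng.normal(size=(N_VERTS, 3, N_BETAS))  # blendshape basis B_S

def shaped_vertices(betas):
    """Deform the template by the shape coefficients (linear blendshapes)."""
    return template + shape_dirs @ betas  # (6890, 3, 10) @ (10,) -> (6890, 3)

mean_body = shaped_vertices(np.zeros(N_BETAS))   # beta = 0 recovers T
new_body = shaped_vertices(rng.normal(size=N_BETAS))  # a different shape
```

Because the mapping from coefficients to vertices is linear, sampling or interpolating `betas` yields a continuum of body shapes, which is what lets the paper generate person images with new shapes by simply editing parameters.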

Related Material


[bibtex]
@InProceedings{Ge_2020_WACV,
author = {Ge, Zhipeng and Chen, Fei and Zhou, Yu and Yu, Yao and Du, Sidan},
title = {Learn a Global Appearance Semi-Supervisedly for Synthesizing Person Images},
booktitle = {The IEEE Winter Conference on Applications of Computer Vision (WACV)},
month = {March},
year = {2020}
}