A Generative Model of People in Clothing

Christoph Lassner, Gerard Pons-Moll, Peter V. Gehler; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 853-862

Abstract


We present the first image-based generative model of people in clothing for the full body. We sidestep the commonly used complex graphics rendering pipeline and the need for high-quality 3D scans of dressed people. Instead, we learn generative models from a large image database. The main challenge is to cope with the high variance in human pose, shape and appearance. For this reason, pure image-based approaches have not been considered so far. We show that this challenge can be overcome by splitting the generation process into two parts. First, we learn to generate a semantic segmentation of the body and clothing. Second, we learn a conditional model on the resulting segments that creates realistic images. The full model is differentiable and can be conditioned on pose, shape or color. The results are samples of people in different clothing items and styles. The proposed model can generate entirely new people with realistic clothing. In several experiments we present encouraging results that suggest that an entirely data-driven approach to people generation is possible.
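
The abstract describes a two-stage pipeline: a latent model first samples a semantic segmentation of body and clothing (optionally conditioned on pose or shape), and a conditional model then renders the segments into an RGB image. The sketch below illustrates only that decomposition; it is not the authors' released architecture, and the module names, layer sizes, number of segmentation classes and pose encoding are illustrative assumptions.

# Hypothetical sketch of the two-stage generation idea from the abstract
# (illustrative only; not the authors' code, sizes and names are assumptions).
import torch
import torch.nn as nn

class SegmentationGenerator(nn.Module):
    """Stage 1: sample a body-and-clothing segmentation from a latent code,
    optionally conditioned on pose."""
    def __init__(self, latent_dim=32, pose_dim=36, n_classes=22, size=64):
        super().__init__()
        self.n_classes, self.size = n_classes, size
        self.decode = nn.Sequential(
            nn.Linear(latent_dim + pose_dim, 256), nn.ReLU(),
            nn.Linear(256, n_classes * size * size),
        )

    def forward(self, z, pose):
        logits = self.decode(torch.cat([z, pose], dim=1))
        return logits.view(-1, self.n_classes, self.size, self.size)

class ImageGenerator(nn.Module):
    """Stage 2: translate segmentation maps into an RGB image
    (a conditional image model in the paper)."""
    def __init__(self, n_classes=22):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_classes, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, seg_probs):
        return self.net(seg_probs)

# End-to-end sampling: latent code + pose -> segmentation -> image.
seg_gen, img_gen = SegmentationGenerator(), ImageGenerator()
z = torch.randn(1, 32)        # latent code for clothing/shape variation
pose = torch.randn(1, 36)     # illustrative pose encoding (e.g. 2-D keypoints), random here
seg = torch.softmax(seg_gen(z, pose), dim=1)
image = img_gen(seg)          # (1, 3, 64, 64) tensor in [-1, 1]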

Related Material


[pdf] [supp] [arXiv] [video]
[bibtex]
@InProceedings{Lassner_2017_ICCV,
  author    = {Lassner, Christoph and Pons-Moll, Gerard and Gehler, Peter V.},
  title     = {A Generative Model of People in Clothing},
  booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
  month     = {Oct},
  year      = {2017}
}