ClothFlow: A Flow-Based Model for Clothed Person Generation

Xintong Han, Xiaojun Hu, Weilin Huang, Matthew R. Scott; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 10471-10480

Abstract


We present ClothFlow, an appearance-flow-based generative model that synthesizes clothed persons for pose-guided person image generation and virtual try-on. By estimating a dense flow between the source and target clothing regions, ClothFlow effectively models geometric changes and naturally transfers appearance to synthesize novel images, as shown in Figure 1. We achieve this with a three-stage framework: 1) Conditioned on a target pose, we first estimate a person semantic layout to provide richer guidance to the generation process. 2) Built on two feature pyramid networks, a cascaded flow estimation network then accurately estimates the appearance matching between corresponding clothing regions. The resulting dense flow warps the source image to flexibly account for deformations. 3) Finally, a generative network takes the warped clothing regions as inputs and renders the target view. We conduct extensive experiments on the DeepFashion dataset for pose-guided person image generation and on the VITON dataset for the virtual try-on task. Strong qualitative and quantitative results validate the effectiveness of our method.
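
To make the warping step in stage 2 concrete, below is a minimal PyTorch sketch (not the authors' released code) of how a dense appearance flow can warp the source clothing region toward the target view with a differentiable bilinear sampler. The function name warp_with_flow and the pixel-offset flow convention are assumptions for illustration; only the idea of sampling the source image through a predicted dense flow follows the paper.

import torch
import torch.nn.functional as F

def warp_with_flow(src: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp `src` (N, C, H, W) by a dense flow field `flow` (N, 2, H, W).

    `flow` holds per-pixel (dx, dy) offsets in pixels: output[y, x]
    samples src[y + dy, x + dx], an appearance-flow-style warp.
    """
    n, _, h, w = src.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=src.dtype, device=src.device),
        torch.arange(w, dtype=src.dtype, device=src.device),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]  # (N, H, W)
    grid_y = ys.unsqueeze(0) + flow[:, 1]
    # Normalize coordinates to [-1, 1], as grid_sample expects.
    grid_x = 2.0 * grid_x / max(w - 1, 1) - 1.0
    grid_y = 2.0 * grid_y / max(h - 1, 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)  # (N, H, W, 2)
    return F.grid_sample(src, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

# Usage: the warped clothing region is what stage 3's rendering
# network would consume. A zero flow should reproduce the input.
src = torch.randn(1, 3, 256, 192)   # source clothing image
flow = torch.zeros(1, 2, 256, 192)  # zero flow -> identity warp
assert torch.allclose(warp_with_flow(src, flow), src, atol=1e-4)

Because the sampler is differentiable, gradients from the downstream rendering loss can reach the flow estimator, which is the usual motivation for appearance-flow-based warping over non-differentiable geometric fitting.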

Related Material


BibTeX
@InProceedings{Han_2019_ICCV,
author = {Han, Xintong and Hu, Xiaojun and Huang, Weilin and Scott, Matthew R.},
title = {ClothFlow: A Flow-Based Model for Clothed Person Generation},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}