VTNFP: An Image-Based Virtual Try-On Network With Body and Clothing Feature Preservation

Ruiyun Yu, Xiaoqi Wang, Xiaohui Xie; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 10511-10520

Abstract


Image-based virtual try-on systems, which aim to transfer a desired clothing item onto the corresponding region of a person, have made great strides recently, but challenges remain in generating realistic-looking images that preserve both body and clothing details. Here we present a new virtual try-on network, called VTNFP, to synthesize photo-realistic images given the image of a clothed person and an image of a target clothing item. To better preserve clothing and body features, VTNFP follows a three-stage design. First, it transforms the target clothing into a warped form compatible with the pose of the given person. Next, it predicts a body segmentation map of the person wearing the target clothing, delineating body parts as well as clothing regions. Finally, the warped clothing, the body segmentation map, and the given person image are fused together for fine-scale image synthesis. A key innovation of VTNFP is the body segmentation map prediction module, which provides critical information to guide synthesis in regions where body parts and clothing intersect, helping to prevent blurry output and to preserve both clothing and body-part details. Experiments on a fashion dataset demonstrate that VTNFP generates substantially better results than state-of-the-art methods.
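To make the three-stage flow concrete, here is a minimal PyTorch sketch of how the stages could be wired together. This is not the paper's architecture: the class name VTNFPPipeline, the tiny convolutional stand-ins for each stage, and the channel counts (8 segmentation classes, 256x192 inputs) are all illustrative assumptions; the actual warping, segmentation, and fusion modules in VTNFP are far more elaborate.

```python
import torch
import torch.nn as nn

class VTNFPPipeline(nn.Module):
    """Hypothetical sketch of the three-stage flow described in the abstract.
    Each stage is a toy conv stack standing in for the real module."""

    def __init__(self):
        super().__init__()
        # Stage 1 stand-in: predicts per-pixel sampling offsets that warp
        # the flat clothing image toward the person's pose.
        self.warp_head = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1), nn.Tanh(),
        )
        # Stage 2 stand-in: predicts a body/clothing segmentation map
        # (8 classes assumed here) for the person wearing the target item.
        self.seg_head = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 8, 3, padding=1),
        )
        # Stage 3 stand-in: fuses person, warped clothing, and segmentation
        # into the final try-on image.
        self.fusion = nn.Sequential(
            nn.Conv2d(3 + 3 + 8, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, person, clothing):
        b, _, h, w = person.shape
        # Stage 1: warp the target clothing toward the person's pose.
        offsets = self.warp_head(torch.cat([person, clothing], dim=1))
        base = torch.stack(torch.meshgrid(
            torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
            indexing="ij"), dim=-1).flip(-1)  # (h, w, 2), (x, y) order
        grid = base.unsqueeze(0).expand(b, -1, -1, -1) \
            + offsets.permute(0, 2, 3, 1)
        warped = nn.functional.grid_sample(clothing, grid,
                                           align_corners=False)
        # Stage 2: predict segmentation of the person in the new outfit.
        seg = self.seg_head(torch.cat([person, warped], dim=1)).softmax(dim=1)
        # Stage 3: fuse everything for fine-scale synthesis.
        return self.fusion(torch.cat([person, warped, seg], dim=1))

person = torch.rand(1, 3, 256, 192)    # clothed-person image
clothing = torch.rand(1, 3, 256, 192)  # flat target clothing item
out = VTNFPPipeline()(person, clothing)
print(out.shape)  # torch.Size([1, 3, 256, 192])
```

The key design point the sketch mirrors is the ordering: the warped clothing from stage 1 feeds the segmentation prediction in stage 2, and both feed the stage-3 fusion, so the synthesizer is guided wherever body parts and clothing intersect.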

Related Material


BibTeX:
@InProceedings{Yu_2019_ICCV,
author = {Yu, Ruiyun and Wang, Xiaoqi and Xie, Xiaohui},
title = {VTNFP: An Image-Based Virtual Try-On Network With Body and Clothing Feature Preservation},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}