Powering Virtual Try-On via Auxiliary Human Segmentation Learning

Kumar Ayush, Surgan Jandial, Ayush Chopra, Balaji Krishnamurthy; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2019

Abstract
Image-based virtual try-on for fashion has gained considerable attention recently. The task is to fit an in-shop clothing image onto a target model image. An effective framework for this is composed of two stages: (1) warping the try-on cloth to align with the body shape and pose of the target model, and (2) an image composition module that seamlessly integrates the warped try-on cloth onto the target model image. Existing methods suffer from artifacts and distortions in their try-on output. In this work, we propose to use auxiliary learning to power an existing state-of-the-art virtual try-on network. We leverage prediction of human semantic segmentation (of the target model wearing the try-on cloth) as an auxiliary task and show that it allows the network to better model the boundaries of the clothing item and human skin, thereby producing a better fit. Through exhaustive qualitative and quantitative evaluation, we show a significant improvement in the preservation of the characteristics of both the cloth and the person in the final try-on result, outperforming the existing state-of-the-art virtual try-on framework.
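The auxiliary-learning setup described above amounts to optimizing the main try-on objective jointly with a per-pixel segmentation loss on an auxiliary human-parsing head. The sketch below is a minimal illustration of that idea, not the paper's exact formulation: the L1 reconstruction term, the auxiliary weight `lam`, and all function names are assumptions for illustration.

```python
import numpy as np

def l1_loss(pred, target):
    """Pixel-wise L1 reconstruction loss for the composed try-on image
    (illustrative stand-in for the main try-on objective)."""
    return np.abs(pred - target).mean()

def seg_cross_entropy(logits, labels):
    """Per-pixel cross-entropy for the auxiliary human-segmentation head.
    logits: (H, W, C) raw class scores; labels: (H, W) integer class ids."""
    z = logits - logits.max(axis=-1, keepdims=True)          # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    h, w = labels.shape
    # Pick the log-probability of the true class at every pixel.
    return -log_probs[np.arange(h)[:, None], np.arange(w)[None, :], labels].mean()

def joint_loss(tryon_pred, tryon_gt, seg_logits, seg_labels, lam=0.1):
    """Main try-on loss plus a weighted auxiliary segmentation loss.
    `lam` (hypothetical) trades off the auxiliary task against the main one."""
    return l1_loss(tryon_pred, tryon_gt) + lam * seg_cross_entropy(seg_logits, seg_labels)
```

Setting `lam=0` recovers the baseline without auxiliary learning, which makes the weight a convenient knob for ablating the contribution of the segmentation task.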

Related Material


[bibtex]
@InProceedings{Ayush_2019_ICCV,
author = {Ayush, Kumar and Jandial, Surgan and Chopra, Ayush and Krishnamurthy, Balaji},
title = {Powering Virtual Try-On via Auxiliary Human Segmentation Learning},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2019}
}