The Conditional Analogy GAN: Swapping Fashion Articles on People Images

Nikolay Jetchev, Urs Bergmann; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2017, pp. 2287-2292

Abstract

We present a novel method for solving image analogy problems: it learns the relation between paired images in the training data, then generalizes and generates images that correspond to that relation but were never seen in the training set. We call the method the Conditional Analogy Generative Adversarial Network (CAGAN), as it is based on adversarial training and employs deep convolutional neural networks. An especially interesting application of this technique is the automatic swapping of clothing on fashion model photos. Our work makes the following contributions. First, we define the end-to-end trainable CAGAN architecture, which implicitly learns segmentation masks without expensive supervised labeling. Second, our experiments show plausible segmentation masks and often convincing swapped images, given the target article. Finally, we discuss next steps for the technique: improvements to the neural network architecture and more advanced applications.
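The claim that segmentation masks are learned implicitly rests on the generator's output format: given the person image and the target article, it emits a raw image proposal plus an extra alpha channel, and the final output alpha-blends the proposal with the original photo. The sketch below is a minimal PyTorch rendering of that blending step, assuming images normalized to [-1, 1]; the layer sizes and the names CAGANGenerator and ch are illustrative assumptions of this sketch, not the paper's actual architecture.

import torch
import torch.nn as nn

class CAGANGenerator(nn.Module):
    # Hypothetical, down-scaled stand-in for a CAGAN-style generator.
    # Input: person image x and target article image a, concatenated
    # along the channel axis. Output: a blended image plus the soft
    # segmentation mask that the network learns without mask labels.
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, ch, 4, stride=2, padding=1),   # 3 (x) + 3 (a) channels in
            nn.LeakyReLU(0.2),
            nn.Conv2d(ch, 2 * ch, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(2 * ch, ch, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(ch, 4, 4, stride=2, padding=1),  # 3 RGB + 1 alpha out
        )

    def forward(self, x, a):
        out = self.net(torch.cat([x, a], dim=1))
        raw = torch.tanh(out[:, :3])       # raw image proposal in [-1, 1]
        alpha = torch.sigmoid(out[:, 3:])  # soft mask in [0, 1]
        # Alpha-blending: pixels where alpha is 0 are copied unchanged
        # from x, so the network only has to "paint" the garment region.
        y = alpha * raw + (1.0 - alpha) * x
        return y, alpha

Because the blend copies the original image wherever the mask is zero, the generator is only rewarded for changing pixels where the article actually sits, which is why plausible segmentation masks can emerge as a by-product of adversarial training rather than from supervised labels.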

Related Material

[pdf] [arXiv]
[bibtex]
@InProceedings{Jetchev_2017_ICCV,
author = {Jetchev, Nikolay and Bergmann, Urs},
title = {The Conditional Analogy GAN: Swapping Fashion Articles on People Images},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2017}
}