Conditional Image-to-Image Translation
Jianxin Lin, Yingce Xia, Tao Qin, Zhibo Chen, Tie-Yan Liu; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 5524-5532
Abstract
Image-to-image translation tasks have been widely investigated with Generative Adversarial Networks (GANs) and dual learning. However, existing models lack the ability to control the translated results in the target domain, and their outputs usually lack diversity in the sense that a fixed input image leads to an (almost) deterministic translation result. In this paper, we study a new problem, conditional image-to-image translation: translating an image from the source domain to the target domain conditioned on a given image in the target domain. It requires that the generated image inherit some domain-specific features of the conditional image from the target domain. Therefore, changing the conditional image in the target domain leads to diverse translation results for a fixed input image from the source domain, and the conditional input image thus helps control the translation results. We tackle this problem with unpaired data based on GANs and dual learning. We twist two conditional translation models (one from domain A to domain B, and the other from domain B to domain A) together for input combination and reconstruction while preserving domain-independent features. We carry out experiments on translation between men's and women's faces and on translations from edges to shoes and bags. The results demonstrate the effectiveness of our proposed method.
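The mechanism the abstract describes (splitting each image into domain-independent and domain-specific features, then recombining them across domains) can be illustrated with a short sketch. The code below is a hypothetical PyTorch illustration and not the authors' implementation; all module names, layer sizes, and the channel split between the two feature types are assumptions chosen for brevity.

import torch
import torch.nn as nn

# Sketch of the core idea: each encoder splits an image into
# domain-independent features (shared content) and domain-specific
# features; the target-domain generator combines the source image's
# domain-independent features with the conditional image's
# domain-specific features. Shapes and names are illustrative.

class Encoder(nn.Module):
    def __init__(self, in_ch=3, di_ch=64, ds_ch=8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, di_ch + ds_ch, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.di_ch = di_ch  # channels holding domain-independent features

    def forward(self, x):
        h = self.backbone(x)
        # Split feature maps into domain-independent / domain-specific parts.
        return h[:, :self.di_ch], h[:, self.di_ch:]

class Generator(nn.Module):
    def __init__(self, di_ch=64, ds_ch=8, out_ch=3):
        super().__init__()
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(di_ch + ds_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, di_feats, ds_feats):
        # Concatenate the two feature groups and decode to an image.
        return self.decode(torch.cat([di_feats, ds_feats], dim=1))

enc_A, enc_B = Encoder(), Encoder()
gen_B = Generator()               # generates images in domain B

x_A = torch.randn(1, 3, 64, 64)   # input image from source domain A
x_B = torch.randn(1, 3, 64, 64)   # conditional image from target domain B

di_A, _ = enc_A(x_A)              # keep domain-independent features of x_A
_, ds_B = enc_B(x_B)              # take domain-specific features of x_B
x_AB = gen_B(di_A, ds_B)          # translation inherits x_B's domain features

In the dual direction, a symmetric pair (enc_B features combined by a generator for domain A) would translate back, and reconstruction losses on both directions provide the training signal from unpaired data; varying x_B while fixing x_A yields diverse outputs.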
Related Material
[pdf]
[arXiv]
[bibtex]
@InProceedings{Lin_2018_CVPR,
author = {Lin, Jianxin and Xia, Yingce and Qin, Tao and Chen, Zhibo and Liu, Tie-Yan},
title = {Conditional Image-to-Image Translation},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}