One-to-one Mapping for Unpaired Image-to-image Translation

Zengming Shen, S. Kevin Zhou, Yifan Chen, Bogdan Georgescu, Xuqi Liu, Thomas Huang; The IEEE Winter Conference on Applications of Computer Vision (WACV), 2020, pp. 1170-1179

Abstract


Recently, image-to-image translation has attracted significant interest in the literature, starting from the successful use of the generative adversarial network (GAN), to the introduction of the cyclic constraint, to extensions to multiple domains. However, in existing approaches, there is no guarantee that the mapping between two image domains is unique or one-to-one. Here we propose a self-inverse network learning approach for unpaired image-to-image translation. Building on top of CycleGAN, we learn a self-inverse function by simply augmenting the training samples with their inputs and outputs switched during training. The outcome of such learning is a provably one-to-one mapping function. Our extensive experiments on a variety of datasets, including cross-modal medical image synthesis, object transfiguration, and semantic labeling, consistently demonstrate clear improvement over the CycleGAN method both qualitatively and quantitatively. In particular, our proposed method achieves the state-of-the-art result on the label-to-photo direction of the Cityscapes benchmark dataset.
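The core augmentation idea described above can be sketched as follows. This is a minimal, hypothetical illustration (the function names and the toy map are not from the paper): a single network trained on both (x, y) and (y, x) pairs is pushed toward a self-inverse mapping, i.e. one where applying the map twice returns the input.

```python
# Hypothetical sketch of the swap augmentation described in the abstract:
# a single translation network is trained on both (x, y) and (y, x),
# encouraging a self-inverse (and hence one-to-one) mapping.

def augment_with_swaps(pairs):
    """Double the training set by adding each pair with input/output swapped."""
    augmented = list(pairs)
    augmented.extend((y, x) for (x, y) in pairs)
    return augmented

# Toy stand-in for the learned mapping: f(x) = 1 - x is its own inverse,
# so it is consistent with every swapped pair it was consistent with before.
def f(x):
    return 1.0 - x

pairs = [(0.2, 0.8), (0.3, 0.7)]
data = augment_with_swaps(pairs)  # 4 pairs: originals plus swapped copies
```

In the actual method, the swapped pairs are fed through the same generator within the CycleGAN framework rather than checked against a fixed analytic function; the sketch only shows the data-side augmentation.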

Related Material


[pdf]
[bibtex]
@InProceedings{Shen_2020_WACV,
author = {Shen, Zengming and Zhou, S. Kevin and Chen, Yifan and Georgescu, Bogdan and Liu, Xuqi and Huang, Thomas},
title = {One-to-one Mapping for Unpaired Image-to-image Translation},
booktitle = {The IEEE Winter Conference on Applications of Computer Vision (WACV)},
month = {March},
year = {2020}
}