Unsupervised Image-to-Image Translation with Stacked Cycle-Consistent Adversarial Networks

Minjun Li, Haozhi Huang, Lin Ma, Wei Liu, Tong Zhang, Yugang Jiang; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 184-199

Abstract
Recent studies on unsupervised image-to-image translation have made remarkable progress by training a pair of generative adversarial networks with a cycle-consistent loss. However, such unsupervised methods may generate inferior results when the image resolution is high or the two image domains have significantly different appearances, such as in translations between semantic layouts and natural images in the Cityscapes dataset. In this paper, we propose novel Stacked Cycle-Consistent Adversarial Networks (SCANs) that decompose a single translation into multi-stage transformations, which not only boost the image translation quality but also enable higher-resolution image-to-image translation in a coarse-to-fine fashion. Moreover, to properly exploit the information from the previous stage, an adaptive fusion block is devised to learn a dynamic integration of the current stage's output and the previous stage's output. Experiments on multiple datasets demonstrate that our proposed approach improves translation quality compared with previous single-stage unsupervised methods.
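The adaptive fusion block described in the abstract can be sketched as a learned per-pixel blend of the previous stage's coarse output and the current stage's refinement. The NumPy snippet below is an illustrative sketch only: the function name is ours, and the fusion mask `alpha` is passed in directly, whereas in the actual method it would be predicted by a small network conditioned on both outputs.

```python
import numpy as np

def adaptive_fusion(prev_output, curr_output, alpha):
    """Blend two stages' outputs with a per-pixel fusion mask alpha in [0, 1].

    Illustrative sketch of the fusion step only; in the paper's method,
    alpha is learned, not supplied by the caller.
    """
    assert prev_output.shape == curr_output.shape == alpha.shape
    # Where alpha is near 1, trust the current (refining) stage;
    # where it is near 0, keep the previous (coarse) stage's result.
    return (1.0 - alpha) * prev_output + alpha * curr_output

# Toy example: 4x4 single-channel "images".
prev = np.zeros((4, 4))
curr = np.ones((4, 4))
alpha = np.full((4, 4), 0.25)
fused = adaptive_fusion(prev, curr, alpha)
print(fused[0, 0])  # 0.25
```

Because the blend is convex per pixel, the block can fall back to the previous stage (alpha = 0) or fully adopt the refinement (alpha = 1) wherever each is more reliable.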

Related Material
[bibtex]
@InProceedings{Li_2018_ECCV,
author = {Li, Minjun and Huang, Haozhi and Ma, Lin and Liu, Wei and Zhang, Tong and Jiang, Yugang},
title = {Unsupervised Image-to-Image Translation with Stacked Cycle-Consistent Adversarial Networks},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}