Unsupervised Learning of Paired Style Statistics for Unpaired Image Translation

Saeid Motiian, Quinn Jones, Stanislav Pidhorskyi, Gianfranco Doretto; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019, pp. 112-121

Abstract


Image-to-image translation has the goal of learning how to transform an input image from one domain as if it were from another domain, while preserving the semantic and global information of the input. We present an image-to-image translation method that can be trained with unpaired images from the source and target domains. In addition, we introduce a regularization that encourages the model to specifically translate the local spatial statistics from one domain to the other, in an effort to leave gross structures unchanged and discourage translation of the semantic content. We do so by learning to generate paired images that map the local statistics from one domain to the other. In turn, such images are used to improve the training of the translation networks, which become more focused on translating only the "style" of images while preserving the semantic content. Experiments on domain translation as well as domain adaptation highlight the effectiveness of our approach in comparison with the state of the art.
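To ground the notion of "local spatial statistics" the abstract refers to, the following is a minimal sketch of one common choice: per-patch, channel-wise mean and standard deviation of an image. This is an illustrative assumption about what such statistics can look like, not the paper's exact formulation; the function name `local_style_stats` and the patch size are hypothetical.

```python
import numpy as np

def local_style_stats(image, patch=8):
    """Per-patch, channel-wise mean and std of an (H, W, C) image.

    One common notion of local style statistics; illustrative only,
    not the paper's exact choice of statistic.
    """
    h, w, c = image.shape
    # Crop so height and width are divisible by the patch size.
    h, w = h - h % patch, w - w % patch
    blocks = image[:h, :w].reshape(h // patch, patch, w // patch, patch, c)
    means = blocks.mean(axis=(1, 3))   # shape: (h//patch, w//patch, c)
    stds = blocks.std(axis=(1, 3))     # shape: (h//patch, w//patch, c)
    return means, stds

# Toy usage on a random "image".
rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
mu, sigma = local_style_stats(img, patch=8)
print(mu.shape, sigma.shape)  # (4, 4, 3) (4, 4, 3)
```

A pairing-based regularizer like the one described could then, for example, penalize discrepancies between such statistics of a translated image and those of its generated counterpart in the target domain, while leaving coarser structure untouched.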

Related Material


[bibtex]
@InProceedings{Motiian_2019_CVPR_Workshops,
author = {Motiian, Saeid and Jones, Quinn and Pidhorskyi, Stanislav and Doretto, Gianfranco},
title = {Unsupervised Learning of Paired Style Statistics for Unpaired Image Translation},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019}
}