Mix and Match Networks: Encoder-Decoder Alignment for Zero-Pair Image Translation
Yaxing Wang, Joost van de Weijer, Luis Herranz; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 5467-5476
Abstract
We address the problem of image translation between domains or modalities for which no direct paired data is available (i.e., zero-pair translation). We propose mix and match networks, based on multiple encoders and decoders that are aligned so that encoder-decoder pairs never trained together can be composed at test time, performing unseen translation tasks between domains or modalities for which no explicit paired samples were seen during training. We study the impact of autoencoders, side information, and losses on improving the alignment and transferability of trained pairwise translation models to unseen translations. We show that our approach is scalable and can perform colorization and style transfer between unseen combinations of domains. We evaluate our system in a challenging cross-modal setting where semantic segmentation is estimated from depth images, without explicit access to any depth-semantic segmentation training pairs. Our model outperforms baselines based on pix2pix and CycleGAN.
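To make the core idea concrete, the sketch below (not the authors' implementation; the module sizes, the modality names rgb/depth/seg, and the 21 segmentation classes are illustrative assumptions) shows how modality-specific encoders and decoders that map to and from a shared latent space can be recombined at test time to perform a translation, such as depth to semantic segmentation, for which no paired data was used during training.

# Minimal sketch of composing encoders and decoders through a shared latent space.
# All names and sizes are illustrative assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn

def make_encoder(in_channels, latent_channels=64):
    # Encodes one modality into the shared latent representation.
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=4, stride=2, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(32, latent_channels, kernel_size=4, stride=2, padding=1),
        nn.ReLU(inplace=True),
    )

def make_decoder(out_channels, latent_channels=64):
    # Decodes the shared latent representation into a target modality.
    return nn.Sequential(
        nn.ConvTranspose2d(latent_channels, 32, kernel_size=4, stride=2, padding=1),
        nn.ReLU(inplace=True),
        nn.ConvTranspose2d(32, out_channels, kernel_size=4, stride=2, padding=1),
    )

# One encoder and one decoder per modality (RGB, depth, semantic segmentation).
encoders = {"rgb": make_encoder(3), "depth": make_encoder(1), "seg": make_encoder(21)}
decoders = {"rgb": make_decoder(3), "depth": make_decoder(1), "seg": make_decoder(21)}

# Training (not shown) would use only the seen pairs, e.g. rgb<->seg and rgb<->depth,
# plus autoencoders and alignment losses so that all latent representations stay compatible.

# Zero-pair translation at test time: mix the depth encoder with the seg decoder,
# even though this pair never saw paired depth-segmentation data.
depth_image = torch.randn(1, 1, 64, 64)
with torch.no_grad():
    latent = encoders["depth"](depth_image)
    seg_logits = decoders["seg"](latent)
print(seg_logits.shape)  # torch.Size([1, 21, 64, 64])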
Related Material
[pdf]
[supp]
[arXiv]
[bibtex]
@InProceedings{Wang_2018_CVPR,
author = {Wang, Yaxing and van de Weijer, Joost and Herranz, Luis},
title = {Mix and Match Networks: Encoder-Decoder Alignment for Zero-Pair Image Translation},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}