Sym-Parameterized Dynamic Inference for Mixed-Domain Image Translation

Simyung Chang, SeongUk Park, John Yang, Nojun Kwak; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 4803-4811

Abstract


Recent advances in image-to-image translation have enabled the generation of images from multiple domains with a single network. However, it remains difficult to create an image of a target domain for which no dataset exists. We propose a method that extends the concept of `multi-domain' from data to the space of loss functions, combining the characteristics of each domain to create an image. First, we introduce the sym-parameter and a learning method that can mix various losses and synchronize them with input conditions. Then, we propose the Sym-parameterized Generative Network (SGN) built on it. Through experiments, we confirmed that SGN can mix the characteristics of various data and losses, and can translate images to any mixed-domain without ground truths, e.g., 30% Van Gogh, 20% Monet, and 40% snowy.
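The core idea sketched in the abstract is that a single weight vector (the sym-parameter) plays two roles: it mixes several per-domain losses into one training objective, and it is simultaneously fed to the generator as an input condition, so at inference time any mixture of domains can be requested. Below is a minimal, hypothetical sketch of that weighting scheme; the function names and the convex-combination formulation are illustrative assumptions, not the authors' implementation.

```python
import random

def mixed_loss(losses, sym_params):
    """Combine per-domain losses using sym-parameter weights.

    `sym_params` is assumed here to lie on the simplex (non-negative,
    summing to one), so the result is a convex combination of losses.
    """
    assert len(losses) == len(sym_params)
    assert abs(sum(sym_params) - 1.0) < 1e-6
    return sum(w * l for w, l in zip(sym_params, losses))

def sample_sym_params(n_domains):
    """Sample a random mixing vector for training; the same vector would
    also be given to the generator as its input condition."""
    raw = [random.random() for _ in range(n_domains)]
    total = sum(raw)
    return [r / total for r in raw]

# Example: three domain losses (e.g. Van Gogh / Monet / snowy styles)
# mixed with weights 0.3, 0.2, 0.5.
losses = [2.0, 1.0, 0.5]
w = [0.3, 0.2, 0.5]
print(mixed_loss(losses, w))  # 0.3*2.0 + 0.2*1.0 + 0.5*0.5 = 1.05
```

At inference, changing `w` alone (with the generator weights frozen) is what lets a single network produce any requested blend of domain characteristics.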

Related Material


BibTeX:
@InProceedings{Chang_2019_ICCV,
author = {Chang, Simyung and Park, SeongUk and Yang, John and Kwak, Nojun},
title = {Sym-Parameterized Dynamic Inference for Mixed-Domain Image Translation},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}