Generating a Fusion Image: One's Identity and Another's Shape

DongGyu Joo, Doyeon Kim, Junmo Kim; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 1635-1643

Abstract

Generating a novel image by manipulating two input images is an interesting research problem in the study of generative adversarial networks (GANs). We propose a new GAN-based network that generates a fusion image with the identity of input image x and the shape of input image y. Our network can simultaneously train on more than two image datasets in an unsupervised manner. We define an identity loss LI to capture the identity of image x and a shape loss LS to preserve the shape of y. In addition, we propose a novel training method called Min-Patch training to focus the generator on crucial parts of an image, rather than its entirety. We show qualitative results on the VGG Youtube Pose dataset, Eye dataset (MPIIGaze and UnityEyes), and the Photo–Sketch–Cartoon dataset.
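The Min-Patch idea mentioned in the abstract can be illustrated with a minimal sketch. The assumptions here are mine, not taken from the paper's equations: a PatchGAN-style discriminator emits a grid of per-patch realness scores in (0, 1), and the generator's adversarial loss is computed only over the k lowest-scoring (most fake-looking) patches, so training pressure concentrates on the worst regions instead of being diluted by an average over the whole image. The function name `min_patch_loss` and the value k=4 are hypothetical.

```python
import numpy as np

def min_patch_loss(patch_scores, k=4):
    """Generator loss over only the k worst (most fake-looking) patches.

    patch_scores: 2-D array of per-patch discriminator outputs in (0, 1),
    where higher means "more real". Keeping only the k minimum scores
    focuses the generator on the regions it renders worst. (Hedged
    sketch; the paper's exact formulation may differ.)
    """
    worst = np.sort(patch_scores.ravel())[:k]           # k lowest scores
    return float(-np.log(np.clip(worst, 1e-8, 1.0)).mean())
```

Compared with averaging the adversarial loss over every patch, this min-pooled variant yields a strictly larger penalty whenever some patches already look convincing, since the convincing patches can no longer mask the bad ones.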

Related Material

[bibtex]
@InProceedings{Joo_2018_CVPR,
author = {Joo, DongGyu and Kim, Doyeon and Kim, Junmo},
title = {Generating a Fusion Image: One's Identity and Another's Shape},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}