TraVeLGAN: Image-To-Image Translation by Transformation Vector Learning

Matthew Amodio, Smita Krishnaswamy; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 8983-8992


Interest in image-to-image translation has grown substantially in recent years with the success of unsupervised models based on the cycle-consistency assumption. The achievements of these models have been limited to a particular subset of domains where this assumption yields good results, namely homogeneous domains that are characterized by style or texture differences. We tackle the challenging problem of image-to-image translation where the domains are defined by high-level shapes and contexts and contain significant clutter and heterogeneity. For this purpose, we introduce a novel GAN based on preserving intra-domain vector transformations in a latent space learned by a siamese network. The traditional GAN system introduced a discriminator network to guide the generator into generating images in the target domain. To this two-network system we add a third: a siamese network that guides the generator so that each original image shares semantics with its generated version. With this new three-network system, we no longer need to constrain the generators with the ubiquitous cycle-consistency constraint or any other autoencoding regularization. As a result, the generators can learn mappings between more complex domains that differ from each other by more than just style or texture. We demonstrate our model by mapping between high-resolution, arbitrarily chosen classes from the ImageNet dataset entirely without pre-processing such as cropping, centering, or filtering unrepresentative images.
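The siamese network's role described above can be made concrete as a "transformation vector" consistency term: for any pair of source images, the vector between their siamese embeddings should be preserved after translation, i.e. S(x_i) - S(x_j) ≈ S(G(x_i)) - S(G(x_j)). Below is a minimal PyTorch sketch of such a pairwise loss; the function name and the use of a plain mean-squared distance are assumptions of this sketch, not necessarily the exact distance used in the paper.

```python
import torch
import torch.nn.functional as F


def travel_loss(s_real: torch.Tensor, s_fake: torch.Tensor) -> torch.Tensor:
    """Transformation-vector consistency loss (sketch).

    s_real: (B, D) siamese embeddings S(x_i) of a batch of source images.
    s_fake: (B, D) siamese embeddings S(G(x_i)) of the translated images.

    Penalizes the mismatch between S(x_i) - S(x_j) and
    S(G(x_i)) - S(G(x_j)) over all pairs (i, j) in the batch.
    The MSE distance here is an illustrative choice.
    """
    # All pairwise transformation vectors within the batch: shape (B, B, D).
    v_real = s_real.unsqueeze(1) - s_real.unsqueeze(0)
    v_fake = s_fake.unsqueeze(1) - s_fake.unsqueeze(0)
    # Mean squared mismatch over all ordered pairs.
    return F.mse_loss(v_fake, v_real)
```

Note that this term constrains only *relative* geometry: a constant shift of every embedding leaves all transformation vectors, and hence the loss, unchanged, which is what frees the generator from cycle-consistency-style reconstruction requirements.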

@InProceedings{Amodio_2019_CVPR,
    author    = {Amodio, Matthew and Krishnaswamy, Smita},
    title     = {TraVeLGAN: Image-To-Image Translation by Transformation Vector Learning},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2019},
    pages     = {8983-8992}
}