Improving Shape Deformation in Unsupervised Image-to-Image Translation
Aaron Gokaslan, Vivek Ramanujan, Daniel Ritchie, Kwang In Kim, James Tompkin; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 649-665
Abstract
Unsupervised image-to-image translation techniques are able to map local texture between two domains, but they are typically unsuccessful when the domains require larger shape change. Inspired by semantic segmentation, we introduce a discriminator with dilated convolutions which is able to use information from across the entire image to train a more context-aware generator. This is coupled with a multi-scale perceptual loss which is better able to represent error in the underlying shape of objects. We demonstrate that this design is more capable of representing shape deformation in a challenging toy dataset, plus in complex mappings with significant dataset variation between humans, dolls, and anime faces, and between cats and dogs.
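The sketch below is not the authors' released code; it is a minimal PyTorch illustration, under assumed design choices, of the two components the abstract names: a patch-style discriminator whose later layers use dilated convolutions to widen the receptive field without further downsampling, and a multi-scale perceptual loss that compares pretrained VGG features of two images at several resolutions. Layer counts, dilation rates, and the VGG cut-off are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class DilatedDiscriminator(nn.Module):
    """Patch discriminator; dilated convolutions grow context without extra striding."""
    def __init__(self, in_ch=3, base=64):
        super().__init__()
        layers = [
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.InstanceNorm2d(base * 2),
            nn.LeakyReLU(0.2, inplace=True),
        ]
        # Dilated layers: each output unit sees a much larger image region.
        for d in (2, 4, 8):  # dilation rates are an assumption for illustration
            layers += [
                nn.Conv2d(base * 2, base * 2, 3, padding=d, dilation=d),
                nn.InstanceNorm2d(base * 2),
                nn.LeakyReLU(0.2, inplace=True),
            ]
        layers += [nn.Conv2d(base * 2, 1, 3, padding=1)]  # per-patch real/fake scores
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)


class MultiScalePerceptualLoss(nn.Module):
    """L1 distance between VGG16 features of two images at several scales."""
    def __init__(self, scales=(1.0, 0.5, 0.25), cutoff=16):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:cutoff]
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg.eval()
        self.scales = scales

    def forward(self, a, b):
        loss = 0.0
        for s in self.scales:
            xa, xb = a, b
            if s != 1.0:
                xa = F.interpolate(a, scale_factor=s, mode="bilinear", align_corners=False)
                xb = F.interpolate(b, scale_factor=s, mode="bilinear", align_corners=False)
            loss = loss + F.l1_loss(self.vgg(xa), self.vgg(xb))
        return loss


if __name__ == "__main__":
    d = DilatedDiscriminator()
    x = torch.randn(1, 3, 128, 128)
    print(d(x).shape)  # patch output map, e.g. torch.Size([1, 1, 32, 32])
```

In a full translation pipeline, the discriminator's wider receptive field lets its adversarial signal penalize implausible global shape, while the multi-scale feature loss penalizes shape error at coarse resolutions where texture detail matters less.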
Related Material
[pdf]
[arXiv]
[bibtex]
@InProceedings{Gokaslan_2018_ECCV,
author = {Gokaslan, Aaron and Ramanujan, Vivek and Ritchie, Daniel and Kim, Kwang In and Tompkin, James},
title = {Improving Shape Deformation in Unsupervised Image-to-Image Translation},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}