Reversible GANs for Memory-Efficient Image-To-Image Translation
Tycho F.A. van der Ouderaa, Daniel E. Worrall; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 4720-4728
Abstract
The Pix2pix and CycleGAN losses have vastly improved the qualitative and quantitative visual quality of results in image-to-image translation tasks. We extend this framework by exploring approximately invertible architectures which are well suited to these losses. These architectures are approximately invertible by design and thus partially satisfy cycle-consistency before training even begins. Furthermore, since invertible architectures have constant memory complexity in depth, these models can be built arbitrarily deep. We demonstrate superior quantitative results on the Cityscapes and Maps datasets at a near-constant memory budget.
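The constant memory complexity in depth comes from reversible (coupling-based) blocks, whose inputs can be recomputed exactly from their outputs during the backward pass instead of being stored. Below is a minimal PyTorch sketch of one such additive coupling block; it illustrates the general RevNet-style construction rather than the authors' released implementation, and the class and layer choices are hypothetical.

import torch
import torch.nn as nn

class AdditiveCouplingBlock(nn.Module):
    # Reversible block: split the input channel-wise into (x1, x2), then
    #   y1 = x1 + F(x2),  y2 = x2 + G(y1),
    # which can be inverted exactly, so activations need not be stored.
    def __init__(self, channels):
        super().__init__()
        half = channels // 2  # assumes an even number of channels
        self.F = nn.Sequential(nn.Conv2d(half, half, 3, padding=1), nn.ReLU(),
                               nn.Conv2d(half, half, 3, padding=1))
        self.G = nn.Sequential(nn.Conv2d(half, half, 3, padding=1), nn.ReLU(),
                               nn.Conv2d(half, half, 3, padding=1))

    def forward(self, x):
        x1, x2 = torch.chunk(x, 2, dim=1)
        y1 = x1 + self.F(x2)
        y2 = x2 + self.G(y1)
        return torch.cat([y1, y2], dim=1)

    def inverse(self, y):
        # Exact inverse of forward: recover (x1, x2) from (y1, y2).
        y1, y2 = torch.chunk(y, 2, dim=1)
        x2 = y2 - self.G(y1)
        x1 = y1 - self.F(x2)
        return torch.cat([x1, x2], dim=1)

# Sanity check: the inverse recovers the input up to floating-point error,
# which is what lets an invertible translator satisfy cycle-consistency by design.
block = AdditiveCouplingBlock(8)
x = torch.randn(1, 8, 64, 64)
assert torch.allclose(block.inverse(block(x)), x, atol=1e-5)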
Related Material
[pdf]
[supp]
[bibtex]
@InProceedings{Ouderaa_2019_CVPR,
author = {Ouderaa, Tycho F.A. van der and Worrall, Daniel E.},
title = {Reversible GANs for Memory-Efficient Image-To-Image Translation},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}