Feedback Adversarial Learning: Spatial Feedback for Improving Generative Adversarial Networks

Minyoung Huh, Shao-Hua Sun, Ning Zhang; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 1476-1485

Abstract


We propose a feedback adversarial learning (FAL) framework that can improve existing generative adversarial networks by leveraging spatial feedback from the discriminator. We formulate the generation task as a recurrent framework in which the discriminator's feedback is integrated into the feedforward path of the generation process. Specifically, the generator conditions on the discriminator's spatial output response and its previous generation to improve generation quality over time, allowing the generator to attend to and fix its previous mistakes. To effectively utilize the feedback, we propose an adaptive spatial transform layer, which learns to spatially modulate feature maps using its previous generation and the error signal from the discriminator. We demonstrate that FAL can be easily adapted to existing adversarial learning frameworks on a wide range of tasks, including image generation, image-to-image translation, and voxel generation.
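To illustrate the recurrent structure described above, here is a minimal toy sketch of the feedback loop. All names (`discriminator_feedback`, `adaptive_spatial_transform`, `generate`) are hypothetical, and the "images" are flat lists of scalar pixels. In the paper, the feedback is the discriminator's patch-wise realism map and the transform is a learned layer; here the feedback is simplified to a per-pixel error signal so the refinement dynamics are visible:

```python
# Toy sketch of the FAL-style recurrent refinement loop (assumptions:
# 1-D "images" of scalar pixels; the real method uses deep networks and
# the discriminator's spatial realism scores, not ground-truth error).

def discriminator_feedback(image, target):
    # Simplified stand-in for the discriminator's spatial response:
    # a per-pixel error signal indicating where the generation is off.
    return [t - p for p, t in zip(image, target)]

def adaptive_spatial_transform(prev, feedback, step=0.5):
    # Toy analogue of the adaptive spatial transform layer: spatially
    # modulate the previous generation by the feedback signal.
    return [p + step * f for p, f in zip(prev, feedback)]

def generate(target, steps=5):
    # Recurrent generation: each step conditions on the previous
    # generation and the feedback, refining earlier mistakes.
    image = [0.0] * len(target)  # initial generation
    errors = []
    for _ in range(steps):
        fb = discriminator_feedback(image, target)
        image = adaptive_spatial_transform(image, fb)
        errors.append(sum(abs(t - p) for p, t in zip(image, target)))
    return image, errors
```

Running `generate([1.0, 2.0, 0.5])` yields a monotonically shrinking error across iterations, mirroring the paper's claim that generation quality improves over time as the generator attends to its previous mistakes.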

Related Material


[bibtex]
@InProceedings{Huh_2019_CVPR,
author = {Huh, Minyoung and Sun, Shao-Hua and Zhang, Ning},
title = {Feedback Adversarial Learning: Spatial Feedback for Improving Generative Adversarial Networks},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}