Generalized Loss-Sensitive Adversarial Learning with Manifold Margins

Marzieh Edraki, Guo-Jun Qi; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 87-102

Abstract


The classic Generative Adversarial Net and its variants can be roughly categorized into two large families: the unregularized versus regularized GANs. By relaxing the non-parametric assumption on the discriminator in the classic GAN, the regularized GANs have better generalization ability to produce new samples drawn from the real distribution. It is well known that real data like natural images are not uniformly distributed over the whole data space. Instead, they are often restricted to a low-dimensional manifold of the ambient space. Such a manifold assumption suggests that distance over the manifold should be a better measure to characterize the distinction between real and fake samples. Thus, we define a pullback operator to map samples back to their data manifold, and a manifold margin is defined as the distance between the pullback representations to distinguish between real and fake samples and learn the optimal generators. We justify the effectiveness of the proposed model both theoretically and empirically.
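The loss-sensitive objective with a manifold margin described above can be illustrated with a toy sketch. This is not the paper's implementation: the linear generator `G`, the linear pullback encoder `E`, and the norm-based critic below are hypothetical stand-ins for the learned networks, and the hinge form (real loss should undercut fake loss by at least the manifold margin) is one plausible reading of a loss-sensitive margin constraint.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy stand-ins (not the paper's networks): a linear
# "generator" mapping noise to data space, and a linear "pullback"
# encoder mapping data back toward assumed manifold coordinates.
G = rng.normal(size=(8, 4))   # noise dim 4 -> data dim 8
E = rng.normal(size=(4, 8))   # data dim 8 -> manifold dim 4

def generate(z):
    # map noise vectors to fake samples in data space
    return z @ G.T

def pullback(x):
    # map samples back to their (assumed) manifold representation
    return x @ E.T

def critic(x):
    # toy loss function L(x); in the paper this is a learned network
    return np.linalg.norm(x, axis=1)

def manifold_margin_loss(x_real, z):
    x_fake = generate(z)
    # manifold margin: distance between pullback representations
    margin = np.linalg.norm(pullback(x_real) - pullback(x_fake), axis=1)
    # loss-sensitive hinge: penalize when the real sample's loss does not
    # undercut the fake sample's loss by at least the manifold margin
    return np.maximum(0.0, margin + critic(x_real) - critic(x_fake)).mean()

x_real = rng.normal(size=(16, 8))
z = rng.normal(size=(16, 4))
loss = manifold_margin_loss(x_real, z)
print(loss)
```

In a real training loop the critic and pullback operator would be trained to minimize this objective while the generator is trained adversarially against it; the sketch only shows how the margin couples the loss gap to distance on the manifold.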

Related Material


[pdf]
[bibtex]
@InProceedings{Edraki_2018_ECCV,
author = {Edraki, Marzieh and Qi, Guo-Jun},
title = {Generalized Loss-Sensitive Adversarial Learning with Manifold Margins},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
pages = {87-102},
month = {September},
year = {2018}
}