Indirect Adversarial Losses via an Intermediate Distribution for Training GANs

Rui Yang, Duc Minh Vo, Hideki Nakayama; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023, pp. 4652-4661

Abstract


In this study, we consider the weak convergence characteristics of Integral Probability Metric (IPM) methods in training Generative Adversarial Networks (GANs). We first concentrate on a successful IPM-based GAN method that employs a repulsive version of the Maximum Mean Discrepancy (MMD) as the discriminator loss (called repulsive MMD-GAN). We reinterpret its repulsive metric as an indirect discriminator loss directed toward an intermediate distribution, and based on this reinterpretation we propose a novel generator loss via such an intermediate distribution. Our indirect adversarial losses use a simple known distribution (i.e., the Normal or Uniform distribution in our experiments) to simulate indirect adversarial learning among three distributions: real, fake, and intermediate. Furthermore, we adopt the Kernelized Stein Discrepancy (KSD), a member of the IPM family, as the adversarial loss function to avoid the randomness introduced by sampling from the intermediate distribution, because the target side (the intermediate distribution) is sample-free in KSD. Experiments on several real-world datasets show that our methods successfully train GANs with the intermediate-distribution-based KSD and MMD and outperform previous loss metrics.
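
To make the "sample-free target" distinction concrete, below is a minimal NumPy sketch of both discrepancies under stated assumptions (a Gaussian RBF kernel, biased V-statistic estimators, and a standard Normal as the intermediate distribution); it is an illustration, not the authors' implementation. Note that MMD needs samples drawn from both sides, while KSD queries the Normal target only through its score function.

import numpy as np

def rbf_kernel(X, Y, h):
    """Gaussian RBF kernel matrix k(x, y) = exp(-||x - y||^2 / (2 h^2))."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-sq / (2.0 * h**2))

def mmd2(X, Y, h=1.0):
    """Biased (V-statistic) squared MMD between sample sets X and Y.
    Requires samples from *both* distributions."""
    return (rbf_kernel(X, X, h).mean()
            + rbf_kernel(Y, Y, h).mean()
            - 2.0 * rbf_kernel(X, Y, h).mean())

def ksd2_standard_normal(X, h=1.0):
    """Biased (V-statistic) squared KSD of samples X against a standard
    Normal target. The target enters only through its score
    s(x) = grad log p(x) = -x, so no target samples are drawn."""
    n, d = X.shape
    K = rbf_kernel(X, X, h)
    S = -X                                   # score of N(0, I) at each x_i
    diff = X[:, None, :] - X[None, :, :]     # diff[i, j] = x_i - x_j
    sqd = np.sum(diff**2, axis=-1)
    t1 = (S @ S.T) * K                                    # s(x)^T s(x') k
    t2 = np.einsum('id,ijd->ij', S, diff) * K / h**2      # s(x)^T grad_{x'} k
    t3 = -np.einsum('jd,ijd->ij', S, diff) * K / h**2     # s(x')^T grad_x k
    t4 = K * (d / h**2 - sqd / h**4)                      # trace(grad_x grad_{x'} k)
    return (t1 + t2 + t3 + t4).mean()

# Toy check: samples far from N(0, I) score higher under both metrics.
rng = np.random.default_rng(0)
fake_bad  = rng.normal(3.0, 1.0, size=(256, 2))
fake_good = rng.normal(0.0, 1.0, size=(256, 2))
inter     = rng.normal(0.0, 1.0, size=(256, 2))   # intermediate samples, MMD only
print(mmd2(fake_bad, inter), mmd2(fake_good, inter))
print(ksd2_standard_normal(fake_bad), ksd2_standard_normal(fake_good))
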

Related Material


[bibtex]
@InProceedings{Yang_2023_WACV,
  author    = {Yang, Rui and Vo, Duc Minh and Nakayama, Hideki},
  title     = {Indirect Adversarial Losses via an Intermediate Distribution for Training GANs},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2023},
  pages     = {4652-4661}
}