CVAE-GAN: Fine-Grained Image Generation Through Asymmetric Training

Jianmin Bao, Dong Chen, Fang Wen, Houqiang Li, Gang Hua; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2745-2754

Abstract


We present variational generative adversarial networks, a general learning framework that combines a variational auto-encoder with a generative adversarial network, for synthesizing images in fine-grained categories, such as faces of a specific person or objects in a category. Our approach models an image as a composition of a category label and latent attributes in a probabilistic model. By varying the fine-grained category label fed into the resulting generative model, we can generate images in a specific category with randomly drawn values of a latent attribute vector. Our approach has two novel aspects. First, we adopt a cross-entropy loss for the discriminator and classifier networks, but a mean discrepancy objective for the generative network. This asymmetric loss makes GAN training more stable. Second, we adopt an encoder network to learn the relationship between the latent space and the real image space, and use pairwise feature matching to preserve the structure of generated images. We experiment with natural images of faces, flowers, and birds, and demonstrate that the proposed models are capable of generating realistic and diverse samples with fine-grained category labels. We further show that our models can be applied to other tasks, such as image inpainting, super-resolution, and data augmentation for training better face recognition models.
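The asymmetric training described above can be illustrated with a short sketch: the discriminator and classifier are trained with cross-entropy losses, while the generator is trained to match the mean of intermediate features between real and generated batches. The following minimal PyTorch sketch assumes hypothetical networks D (real/fake discriminator), C (fine-grained classifier), and feature extractors feat_D, feat_C; the paper's exact architectures, feature layers, and loss weights are not given here, so this illustrates the asymmetry rather than reproducing the authors' implementation.

import torch
import torch.nn.functional as F

def discriminator_classifier_loss(D, C, x_real, x_fake, y_real):
    """Cross-entropy objectives for the discriminator D (real vs. fake)
    and the classifier C (fine-grained category)."""
    d_real = D(x_real)            # real/fake logits for real images
    d_fake = D(x_fake.detach())   # real/fake logits for generated images
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    loss_c = F.cross_entropy(C(x_real), y_real)
    return loss_d, loss_c

def generator_mean_matching_loss(feat_D, feat_C, x_real, x_fake):
    """Mean feature-matching objective for the generator: instead of
    back-propagating the cross-entropy signal, match the batch-mean of
    intermediate D/C features on real and generated images."""
    loss = x_fake.new_zeros(())
    for feat in (feat_D, feat_C):           # feat(x) returns an intermediate feature tensor
        mu_real = feat(x_real).mean(dim=0)  # mean feature over real batch
        mu_fake = feat(x_fake).mean(dim=0)  # mean feature over generated batch
        loss = loss + F.mse_loss(mu_fake, mu_real.detach())
    return loss

In a training loop one would alternate: update D and C by minimizing discriminator_classifier_loss, then update the generator (and encoder) by minimizing generator_mean_matching_loss, which is the asymmetry the abstract refers to.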

Related Material


BibTeX:
@InProceedings{Bao_2017_ICCV,
author = {Bao, Jianmin and Chen, Dong and Wen, Fang and Li, Houqiang and Hua, Gang},
title = {CVAE-GAN: Fine-Grained Image Generation Through Asymmetric Training},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}