DivCo: Diverse Conditional Image Synthesis via Contrastive Generative Adversarial Network

Rui Liu, Yixiao Ge, Ching Lam Choi, Xiaogang Wang, Hongsheng Li; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 16377-16386

Abstract


Conditional generative adversarial networks (cGANs) aim to synthesize diverse images given input conditions and latent codes, but unfortunately, they usually suffer from mode collapse. To address this issue, previous works mainly focused on encouraging the correlation between the latent codes and the generated images, while ignoring the relations between images generated from different latent codes. The recent MSGAN attempted to encourage diversity among the generated images but still only considers "negative" relations between image pairs. In this paper, we propose a novel DivCo framework to properly constrain both "positive" and "negative" relations between the generated images specified in the latent space. To the best of our knowledge, this is the first attempt to use contrastive learning for diverse conditional image synthesis. A latent-augmented contrastive loss is introduced, which encourages images generated from adjacent latent codes to be similar and those generated from distinct latent codes to show low affinities. The proposed latent-augmented contrastive loss is compatible with various cGAN architectures. Extensive experiments demonstrate that the proposed DivCo can produce more diverse images than state-of-the-art methods without sacrificing visual quality in multiple settings.
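The latent-augmented contrastive loss described above can be read as an InfoNCE-style objective over generator outputs: features of images from nearby latent codes act as positives, and features of images from distant latent codes act as negatives. The PyTorch sketch below is a hypothetical illustration of that idea under these assumptions, not the authors' released implementation; the function name, feature-tensor arguments, and temperature tau are illustrative.

```python
import torch
import torch.nn.functional as F


def latent_augmented_contrastive_loss(anchor_feat, pos_feat, neg_feats, tau=0.07):
    """InfoNCE-style sketch of a latent-augmented contrastive loss (assumed, illustrative).

    anchor_feat: (B, D) features of images generated from anchor latent codes.
    pos_feat:    (B, D) features of images from latent codes close to the anchors.
    neg_feats:   (B, N, D) features of images from latent codes far from the anchors.
    """
    anchor = F.normalize(anchor_feat, dim=-1)
    pos = F.normalize(pos_feat, dim=-1)
    neg = F.normalize(neg_feats, dim=-1)

    # Positive similarity: images from adjacent latent codes should be similar.
    pos_logit = (anchor * pos).sum(dim=-1, keepdim=True) / tau       # (B, 1)
    # Negative similarities: images from distinct latent codes should show low affinity.
    neg_logits = torch.einsum("bd,bnd->bn", anchor, neg) / tau       # (B, N)

    logits = torch.cat([pos_logit, neg_logits], dim=1)               # (B, 1 + N)
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    # Cross-entropy with the positive in slot 0 pulls positives together
    # and pushes negatives apart.
    return F.cross_entropy(logits, labels)
```

In practice such a term would be added to the usual cGAN objectives, with positives obtained by perturbing the anchor latent code and negatives by sampling latent codes far from it.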

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Liu_2021_CVPR,
  author    = {Liu, Rui and Ge, Yixiao and Choi, Ching Lam and Wang, Xiaogang and Li, Hongsheng},
  title     = {DivCo: Diverse Conditional Image Synthesis via Contrastive Generative Adversarial Network},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2021},
  pages     = {16377-16386}
}