Semantics-Enhanced Adversarial Nets for Text-to-Image Synthesis

Hongchen Tan, Xiuping Liu, Xin Li, Yi Zhang, Baocai Yin; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 10501-10510

Abstract

This paper presents a new model, the Semantics-enhanced Generative Adversarial Network (SEGAN), for fine-grained text-to-image generation. We introduce two modules into SEGAN: a Semantic Consistency Module (SCM) and an Attention Competition Module (ACM). The SCM incorporates image-level semantic consistency into the training of the Generative Adversarial Network (GAN), diversifying the generated images and improving their structural coherence. A Siamese network and two types of semantic similarity are designed to map the synthesized image and the ground-truth image to nearby points in the latent semantic feature space. The ACM constructs adaptive attention weights to differentiate keywords from unimportant words, improving the stability and accuracy of SEGAN. Extensive experiments demonstrate that SEGAN significantly outperforms existing state-of-the-art methods in generating photo-realistic images. All source code and models will be released for comparative study.
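The two modules described above are easy to prototype. Below is a minimal PyTorch sketch of both ideas, assuming a contrastive-style objective for the SCM and a softmax-normalized word-relevance score for the ACM. The names SiameseEncoder, semantic_consistency_loss, and attention_competition, the encoder architecture, and the margin and temperature values are illustrative assumptions; the abstract does not specify the paper's exact formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    # Shared-weight image encoder: maps an image to a point in a latent
    # semantic feature space (the architecture here is illustrative).
    def __init__(self, feat_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(128, feat_dim)

    def forward(self, img):
        h = self.backbone(img).flatten(1)
        return F.normalize(self.proj(h), dim=-1)  # unit-length embedding

def semantic_consistency_loss(encoder, fake_img, real_img, margin=0.5):
    # Pull each synthesized image toward its ground-truth image in the
    # latent space; a contrastive term pushes mismatched (rolled) pairs
    # apart by at least `margin`. The two similarities used in the paper
    # are not detailed in the abstract; cosine similarity stands in here.
    z_fake = encoder(fake_img)
    z_real = encoder(real_img)
    pos = 1.0 - F.cosine_similarity(z_fake, z_real)       # matched pairs
    neg = F.cosine_similarity(z_fake, z_real.roll(1, 0))  # mismatched pairs
    return (pos + F.relu(neg - margin)).mean()

def attention_competition(word_feats, sent_feat, tau=0.1):
    # Adaptive attention weights: words whose features align with the
    # sentence embedding win the competition and receive larger weights,
    # while unimportant words are suppressed.
    # word_feats: (B, T, D); sent_feat: (B, D).
    scores = torch.bmm(word_feats, sent_feat.unsqueeze(-1)).squeeze(-1)  # (B, T)
    return F.softmax(scores / tau, dim=-1)

In a training loop of this shape, semantic_consistency_loss would be added to the generator's adversarial objective with a weighting coefficient, and attention_competition would replace a uniform word-attention step in the text-conditioned generator.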

Related Material

BibTeX:
@InProceedings{Tan_2019_ICCV,
author = {Tan, Hongchen and Liu, Xiuping and Li, Xin and Zhang, Yi and Yin, Baocai},
title = {Semantics-Enhanced Adversarial Nets for Text-to-Image Synthesis},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019},
pages = {10501-10510}
}