Semantic Image Synthesis via Adversarial Learning

Hao Dong, Simiao Yu, Chao Wu, Yike Guo; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 5706-5714


In this paper, we propose a way of synthesizing realistic images directly from a natural language description, which has many useful applications, e.g. intelligent image manipulation. We attempt to accomplish such synthesis: given a source image and a target text description, our model synthesizes images to meet two requirements: 1) being realistic while matching the target text description; 2) maintaining other image features that are irrelevant to the text description. The model should be able to disentangle the semantic information from the two modalities (image and text), and generate new images from the combined semantics. To achieve this, we propose an end-to-end neural architecture that leverages adversarial learning to automatically learn implicit loss functions, which are optimized to fulfill the aforementioned two requirements. We have evaluated our model by conducting experiments on the Caltech-200 bird dataset and the Oxford-102 flower dataset, and have demonstrated that our model is capable of synthesizing realistic images that match the given descriptions, while still maintaining other features of the original images.
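The two requirements above are typically enforced by an adversarial objective in which the discriminator scores (image, text) pairs rather than images alone: a real image with its matching text should score as real, while a real image with a mismatched text, or a synthesized image, should score as fake. The following is a minimal, stdlib-only sketch of such a matching-aware adversarial loss; the function names, the 0.5 weighting of the two fake terms, and the use of plain binary cross-entropy are illustrative assumptions, not the paper's exact formulation.

```python
import math

def bce(p, target):
    # Binary cross-entropy for a single sigmoid probability p in (0, 1)
    # against a 0/1 target label.
    eps = 1e-12  # numerical guard against log(0)
    return -(target * math.log(p + eps) + (1 - target) * math.log(1 - p + eps))

def discriminator_loss(d_real_match, d_real_mismatch, d_fake):
    # d_real_match:    D's score for (real image, matching text)   -> label 1
    # d_real_mismatch: D's score for (real image, mismatched text) -> label 0
    #                  (this term is what forces text relevance)
    # d_fake:          D's score for (synthesized image, text)     -> label 0
    # The 0.5 weighting of the two "fake" terms is an assumed choice.
    return (bce(d_real_match, 1.0)
            + 0.5 * (bce(d_real_mismatch, 0.0) + bce(d_fake, 0.0)))

def generator_loss(d_fake):
    # The generator tries to make its (synthesized image, text) pair
    # score as real, i.e. push d_fake toward 1.
    return bce(d_fake, 1.0)
```

In training, these scalar losses would be computed over batches of discriminator outputs and backpropagated through the networks; the requirement to preserve text-irrelevant features of the source image is handled by the architecture itself rather than by this loss.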

@InProceedings{Dong_2017_ICCV,
author = {Dong, Hao and Yu, Simiao and Wu, Chao and Guo, Yike},
title = {Semantic Image Synthesis via Adversarial Learning},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}