Text Guided Person Image Synthesis

Xingran Zhou, Siyu Huang, Bin Li, Yingming Li, Jiachen Li, Zhongfei Zhang; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 3663-3672

Abstract


This paper presents a novel method for manipulating the visual appearance (pose and attribute) of a person image according to natural language descriptions. Our method consists of two stages: 1) text-guided pose generation and 2) visual appearance transferred image synthesis. In the first stage, our method infers a reasonable target human pose from the text. In the second stage, our method synthesizes a realistic, appearance-transferred person image conditioned on the text together with the target pose. Our method extracts sufficient information from the text and establishes a mapping between the image space and the language space, making it possible to generate and edit images corresponding to the description. We conduct extensive experiments to demonstrate the effectiveness of our method, and introduce the VQA Perceptual Score as a metric for evaluating it. Our results show, for the first time, that a person image can be automatically edited from natural language descriptions.
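The two-stage pipeline in the abstract (text to target pose, then pose plus text to image) can be sketched as the following interface. This is a minimal illustration of the data flow only; the function names, the 18-keypoint pose layout, and the stub bodies are assumptions for exposition, not the paper's actual architecture.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Pose:
    # Hypothetical pose representation: 2-D keypoint coordinates
    # (an 18-joint layout is assumed here for illustration).
    keypoints: List[Tuple[float, float]]

def text_to_pose(description: str) -> Pose:
    """Stage 1 (stub): infer a target human pose from the text.

    A real model would encode the description and decode keypoints;
    this placeholder returns a fixed 18-joint pose.
    """
    return Pose(keypoints=[(0.5, 0.5)] * 18)

def synthesize(source_image: str, pose: Pose, description: str) -> dict:
    """Stage 2 (stub): synthesize an appearance-transferred image
    conditioned on the source image, the target pose, and the text.

    A real generator would fuse image, pose, and text features;
    this placeholder only records the conditioning inputs to show
    how the two stages compose.
    """
    return {"image": source_image, "pose": pose, "text": description}

# End-to-end: the text drives both the pose and the appearance edit.
description = "a woman wearing a red dress"
target_pose = text_to_pose(description)
result = synthesize("person.jpg", target_pose, description)
```

The key point the sketch conveys is that the text is consumed twice: once to produce the target pose, and again (with that pose) to condition the image synthesis.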

Related Material


[bibtex]
@InProceedings{Zhou_2019_CVPR,
author = {Zhou, Xingran and Huang, Siyu and Li, Bin and Li, Yingming and Li, Jiachen and Zhang, Zhongfei},
title = {Text Guided Person Image Synthesis},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}