Language-Guided Global Image Editing via Cross-Modal Cyclic Mechanism

Wentao Jiang, Ning Xu, Jiayun Wang, Chen Gao, Jing Shi, Zhe Lin, Si Liu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 2115-2124

Abstract


Editing an image automatically via a linguistic request can save significant laborious manual work and is friendly to photography novices. In this paper, we focus on the task of language-guided global image editing. Existing works suffer from the imbalanced data distribution of real-world datasets and thus fail to understand language requests well. To handle this issue, we propose to form a cycle with our image generator by introducing another model, the Editing Description Network (EDNet), which predicts an editing embedding given a pair of images. Based on this cycle, we propose several free augmentation strategies that help our model understand various editing requests despite the imbalanced dataset. In addition, two other novel ideas are proposed: an Image-Request Attention (IRA) module, which allows our method to edit an image spatially adaptively when different regions of the image require different degrees of editing, and a new evaluation metric for this task that is more semantic and reasonable than conventional pixel losses (e.g., L1). Extensive experiments on two benchmark datasets demonstrate the effectiveness of our method over existing approaches.
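To make the cyclic mechanism concrete, the following is a minimal PyTorch sketch of the idea only, not the paper's actual architecture: a generator edits an image conditioned on an editing embedding, EDNet recovers an editing embedding from the (input, edited) pair, and a cycle loss encourages the recovered embedding to match the requested one. All module definitions, dimensions, and names here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical, heavily simplified modules; the real networks in the paper differ.
class Generator(nn.Module):
    """Edits an image conditioned on an editing embedding (toy convolutional sketch)."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.fuse = nn.Conv2d(3 + embed_dim, 32, 3, padding=1)
        self.out = nn.Conv2d(32, 3, 3, padding=1)

    def forward(self, image, edit_embed):
        # Broadcast the editing embedding over the spatial grid and fuse it with the image.
        b, _, h, w = image.shape
        e = edit_embed.view(b, -1, 1, 1).expand(b, edit_embed.size(1), h, w)
        x = F.relu(self.fuse(torch.cat([image, e], dim=1)))
        return torch.tanh(self.out(x))


class EDNet(nn.Module):
    """Predicts an editing embedding from an (input, edited) image pair."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, embed_dim)

    def forward(self, image, edited):
        feat = self.encode(torch.cat([image, edited], dim=1)).flatten(1)
        return self.head(feat)


# Cyclic consistency: the embedding EDNet recovers from (input, output) should
# match the request embedding that drove the edit in the first place.
G, E = Generator(), EDNet()
image = torch.rand(2, 3, 64, 64)
request_embed = torch.randn(2, 64)   # e.g., from a language encoder (not shown here)

edited = G(image, request_embed)     # forward edit
recovered = E(image, edited)         # describe the edit that was applied
cycle_loss = F.l1_loss(recovered, request_embed)
```

Because EDNet maps any image pair to an editing embedding, the cycle also enables the augmentation strategies mentioned above, e.g., deriving pseudo editing embeddings from additional image pairs without paired language requests.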

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Jiang_2021_ICCV,
  author    = {Jiang, Wentao and Xu, Ning and Wang, Jiayun and Gao, Chen and Shi, Jing and Lin, Zhe and Liu, Si},
  title     = {Language-Guided Global Image Editing via Cross-Modal Cyclic Mechanism},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  pages     = {2115-2124}
}