[bibtex]
@InProceedings{Shi_2021_CVPR,
  author    = {Shi, Jing and Xu, Ning and Xu, Yihang and Bui, Trung and Dernoncourt, Franck and Xu, Chenliang},
  title     = {Learning by Planning: Language-Guided Global Image Editing},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2021},
  pages     = {13590-13599}
}
Learning by Planning: Language-Guided Global Image Editing
Abstract
Recently, language-guided global image editing has drawn increasing attention with growing application potential. However, previous GAN-based methods are not only confined to domain-specific, low-resolution data but also lack interpretability. To overcome these difficulties, we develop a text-to-operation model that maps a vague editing language request into a series of editing operations, e.g., changes to contrast, brightness, and saturation. Each operation is interpretable and differentiable. Furthermore, the only supervision in the task is the target image, which is insufficient for stable training of sequential decisions. Hence, we propose a novel operation planning algorithm that generates possible editing sequences from the target image as pseudo ground truth. Comparison experiments on the newly collected MA5k-Req dataset and the GIER dataset show the advantages of our method. Code is available at https://github.com/jshi31/T2ONet.
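The abstract's idea of composing interpretable, differentiable global operations can be sketched as follows. This is a minimal illustration under assumed operator definitions (simple affine adjustments of brightness, contrast, and saturation); the paper's actual operators and the T2ONet implementation may differ.

```python
import numpy as np

# Hypothetical differentiable global editing operators, each controlled by a
# single scalar parameter. Images are float arrays in [0, 1] with shape (H, W, 3).

def adjust_brightness(img, b):
    """Shift every pixel value by b."""
    return np.clip(img + b, 0.0, 1.0)

def adjust_contrast(img, c):
    """Scale the deviation from the mean intensity by (1 + c)."""
    mean = img.mean()
    return np.clip(mean + (1.0 + c) * (img - mean), 0.0, 1.0)

def adjust_saturation(img, s):
    """Scale the deviation from a grayscale version by (1 + s)."""
    gray = img.mean(axis=-1, keepdims=True)  # simple luminance proxy
    return np.clip(gray + (1.0 + s) * (img - gray), 0.0, 1.0)

def apply_sequence(img, plan):
    """Apply a planned sequence of (operation, parameter) pairs in order."""
    for op, param in plan:
        img = op(img, param)
    return img

# A plan such as a text-to-operation model might emit for
# "make it brighter and punchier, tone down the colors":
img = np.full((2, 2, 3), 0.5)
out = apply_sequence(img, [(adjust_brightness, 0.1),
                           (adjust_contrast, 0.2),
                           (adjust_saturation, -0.3)])
```

Because each operator is a smooth function of its parameter (up to clipping), gradients from an image-level loss against the target image can flow back into the predicted parameters, which is what makes supervising the model with only the target image feasible.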