Dynamic Prompt Optimizing for Text-to-Image Generation

Wenyi Mo, Tianyu Zhang, Yalong Bai, Bing Su, Ji-Rong Wen, Qing Yang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 26627-26636

Abstract


Text-to-image generative models, specifically those based on diffusion models like Imagen and Stable Diffusion, have made substantial advancements. Recently, there has been a surge of interest in the delicate refinement of text prompts: users assign weights to certain words in the text prompts or alter their injection time steps to improve the quality of generated images. However, the success of fine-control prompts depends on the accuracy of the text prompts and the careful selection of weights and time steps, which requires significant manual intervention. To address this, we introduce the Prompt Auto-Editing (PAE) method. Besides refining the original prompts for image generation, we further employ an online reinforcement learning strategy to explore the weight and injection time steps of each word, leading to dynamic fine-control prompts. The reward function during training encourages the model to consider aesthetic score, semantic consistency, and user preferences. Experimental results demonstrate that our proposed method effectively improves the original prompts, generating visually more appealing images while maintaining semantic alignment. Code is available at https://github.com/Mowenyii/PAE.
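
To make the notion of a dynamic fine-control prompt concrete, the sketch below models a prompt as per-word triples of (word, weight, injection interval) and a reward as a weighted sum of the three terms named in the abstract. The field names (weight, t_start, t_end), the time-step convention, and the linear reward combination are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class PromptToken:
    """One word of a dynamic fine-control prompt (hypothetical format).

    weight:          emphasis multiplier for the word (1.0 = neutral).
    t_start, t_end:  normalized span of the sampling trajectory during
                     which the word is injected (0.0 = start, 1.0 = end).
    """
    word: str
    weight: float = 1.0
    t_start: float = 0.0
    t_end: float = 1.0


def active_tokens(prompt: List[PromptToken], t: float) -> List[PromptToken]:
    """Return the tokens injected at normalized sampling step t."""
    return [tok for tok in prompt if tok.t_start <= t <= tok.t_end]


def reward(aesthetic: float, clip_sim: float, preference: float,
           w_a: float = 1.0, w_c: float = 1.0, w_p: float = 1.0) -> float:
    """Combine aesthetic score, semantic consistency, and user preference.

    The weights w_a/w_c/w_p and the linear combination are assumptions
    for illustration; the paper may aggregate these terms differently.
    """
    return w_a * aesthetic + w_c * clip_sim + w_p * preference


if __name__ == "__main__":
    prompt = [
        PromptToken("a"),
        PromptToken("castle"),
        # "sunset" is emphasized and only injected during the first half
        # of the trajectory, where coarse layout and color are decided.
        PromptToken("sunset", weight=1.3, t_start=0.0, t_end=0.5),
    ]
    print([tok.word for tok in active_tokens(prompt, t=0.8)])  # ['a', 'castle']
```

Under this representation, the reinforcement-learning policy's action space is the choice of weight and (t_start, t_end) per word, and the scalar returned by reward() is what the policy would be trained to maximize.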

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Mo_2024_CVPR,
    author    = {Mo, Wenyi and Zhang, Tianyu and Bai, Yalong and Su, Bing and Wen, Ji-Rong and Yang, Qing},
    title     = {Dynamic Prompt Optimizing for Text-to-Image Generation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {26627-26636}
}