Minority-Focused Text-to-Image Generation via Prompt Optimization

Soobin Um, Jong Chul Ye; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), 2025, pp. 20926-20936

Abstract


We investigate the generation of minority samples using pretrained text-to-image (T2I) latent diffusion models. Minority instances, in the context of T2I generation, can be defined as those lying in low-density regions of text-conditional data distributions. They are valuable for various applications of modern T2I generators, such as data augmentation and creative AI. Unfortunately, existing pretrained T2I diffusion models primarily focus on high-density regions, largely due to the influence of guided samplers (like CFG) that are essential for high-quality generation. To address this, we present a novel framework to counter the high-density focus of T2I diffusion models. Specifically, we first develop an online prompt optimization framework that encourages the emergence of desired properties during inference while preserving the semantic content of user-provided prompts. We subsequently tailor this generic prompt optimizer into a specialized solver that promotes the generation of minority features by incorporating a carefully-crafted likelihood objective. Extensive experiments conducted across various types of T2I models demonstrate that our approach significantly enhances the capability to produce high-quality minority instances compared to existing samplers. Code is available at https://github.com/soobin-um/MinorityPrompt.
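The core idea of the abstract can be illustrated with a toy sketch: during inference, the prompt embedding is updated to descend a likelihood (density) proxy, pushing generations toward low-density (minority) regions, while a regularizer anchors the embedding to the user's original prompt to preserve its semantics. Everything below is a hypothetical, simplified illustration, not the paper's actual implementation: `density_grad_fn`, `lr`, `reg`, and the Gaussian log-density stand-in are assumptions for demonstration only.

```python
import numpy as np

def minority_prompt_step(prompt_emb, density_grad_fn, lr=0.1, anchor=None, reg=0.5):
    """One online update of a prompt embedding (toy sketch).

    Moves the embedding toward lower-density regions (minority features)
    while an optional anchor term pulls it back toward the user's
    original prompt to preserve semantic content.
    """
    grad = density_grad_fn(prompt_emb)   # gradient of a log-density proxy
    update = -lr * grad                  # descend density -> minority regions
    if anchor is not None:
        update -= lr * reg * (prompt_emb - anchor)  # semantic-preservation pull
    return prompt_emb + update

# Toy example: standard-Gaussian log-density, whose gradient is -x,
# so descending density moves the embedding away from the mode.
anchor = np.array([1.0, -0.5])           # stand-in for the user's prompt embedding
emb = anchor.copy()
for _ in range(10):
    emb = minority_prompt_step(emb, lambda x: -x, lr=0.1, anchor=anchor)
```

In the real method, the density proxy would come from the diffusion model's likelihood objective and the update would run alongside the sampling loop; this sketch only conveys the trade-off between seeking low density and staying close to the original prompt.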

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Um_2025_CVPR,
    author    = {Um, Soobin and Ye, Jong Chul},
    title     = {Minority-Focused Text-to-Image Generation via Prompt Optimization},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {20926-20936}
}