PLA: Prompt Learning Attack against Text-to-Image Generative Models

Xinqi Lyu, Yihao Liu, Yanjie Li, Bin Xiao; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025, pp. 16851-16860

Abstract


Text-to-Image (T2I) models have gained widespread adoption across various applications. Despite this success, the potential misuse of T2I models poses significant risks of generating Not-Safe-For-Work (NSFW) content. To investigate the vulnerability of T2I models, this paper delves into adversarial attacks that bypass safety mechanisms under black-box settings. Most previous methods rely on word substitution to search for adversarial prompts; due to the limited search space, this leads to suboptimal performance compared to gradient-based training. However, black-box settings present unique challenges for training gradient-driven attack methods, since there is no access to the internal architecture or parameters of T2I models. To facilitate the learning of adversarial prompts in black-box settings, we propose a novel prompt learning attack framework (PLA), in which gradient-based training tailored to black-box T2I models is designed by utilizing multimodal similarities. Experiments show that our new method can effectively attack the safety mechanisms of black-box T2I models, including prompt filters and post-hoc safety checkers, with a high success rate compared to state-of-the-art methods. Warning: This paper may contain offensive model-generated content.

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Lyu_2025_ICCV,
    author    = {Lyu, Xinqi and Liu, Yihao and Li, Yanjie and Xiao, Bin},
    title     = {PLA: Prompt Learning Attack against Text-to-Image Generative Models},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {16851-16860}
}