ARTIST: Improving the Generation of Text-Rich Images with Disentangled Diffusion Models and Large Language Models
Abstract
Diffusion models have demonstrated exceptional capabilities in generating a broad spectrum of visual content, yet their proficiency in rendering text is still limited: they often generate inaccurate characters or words that fail to blend well with the underlying image. To address these shortcomings, we introduce a novel framework named ARTIST, which incorporates a dedicated textual diffusion model to focus specifically on learning text structures. Initially, we pretrain this textual model to capture the intricacies of text representation. Subsequently, we finetune a visual diffusion model, enabling it to assimilate textual structure information from the pretrained textual model. This disentangled architecture design and training strategy significantly enhance the text rendering ability of diffusion models for text-rich image generation. Additionally, we leverage the capabilities of pretrained large language models to better interpret user intentions, contributing to improved generation quality. Empirical results on the MARIO-Eval benchmark underscore the effectiveness of the proposed method, showing an improvement of up to 15% across various metrics.
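The abstract describes a two-stage, disentangled training strategy: first pretrain a textual diffusion model on text structures, then finetune a visual diffusion model that consumes the frozen textual model's features. The sketch below is only a minimal illustration of that idea under stated assumptions; the module names (TextStructureDenoiser, VisualDenoiser), the feature-injection mechanism, and the single-step toy noise process are placeholders, not ARTIST's actual architecture or code.

```python
# Minimal sketch of disentangled two-stage diffusion training (illustrative only;
# all names, shapes, and the toy noise schedule are assumptions, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TextStructureDenoiser(nn.Module):
    """Stage 1 model: learns to denoise glyph-only (text-structure) images."""
    def __init__(self, dim=64):
        super().__init__()
        self.encoder = nn.Conv2d(3, dim, 3, padding=1)
        self.decoder = nn.Conv2d(dim, 3, 3, padding=1)

    def features(self, x_t):
        # Intermediate text-structure features, later passed to the visual model.
        return F.silu(self.encoder(x_t))

    def forward(self, x_t):
        return self.decoder(self.features(x_t))  # predicted noise


class VisualDenoiser(nn.Module):
    """Stage 2 model: denoises the full image while reading text-structure features."""
    def __init__(self, dim=64):
        super().__init__()
        self.encoder = nn.Conv2d(3, dim, 3, padding=1)
        self.fuse = nn.Conv2d(2 * dim, dim, 1)  # simple placeholder feature injection
        self.decoder = nn.Conv2d(dim, 3, 3, padding=1)

    def forward(self, x_t, text_feat):
        h = F.silu(self.encoder(x_t))
        h = F.silu(self.fuse(torch.cat([h, text_feat], dim=1)))
        return self.decoder(h)


def add_noise(x0):
    """Toy forward process: a single fixed noise level instead of a full schedule."""
    noise = torch.randn_like(x0)
    return (0.5 ** 0.5) * x0 + (0.5 ** 0.5) * noise, noise


# Stage 1: pretrain the textual model on glyph (text-structure) images alone.
glyphs = torch.randn(4, 3, 32, 32)               # stand-in for rendered text masks
text_model = TextStructureDenoiser()
opt = torch.optim.Adam(text_model.parameters(), lr=1e-4)
x_t, noise = add_noise(glyphs)
F.mse_loss(text_model(x_t), noise).backward()
opt.step()

# Stage 2: freeze the textual model and finetune the visual model on its features.
images = torch.randn(4, 3, 32, 32)               # stand-in for text-rich images
text_model.requires_grad_(False)
visual_model = VisualDenoiser()
opt = torch.optim.Adam(visual_model.parameters(), lr=1e-4)
x_t, noise = add_noise(images)
with torch.no_grad():
    feat = text_model.features(x_t)              # disentangled text-structure signal
F.mse_loss(visual_model(x_t, feat), noise).backward()
opt.step()
```

In this reading, the text-structure pathway is learned in isolation and then kept fixed, so the visual model only has to learn how to blend that structure into the surrounding image; how ARTIST actually conditions the visual model, and how the LLM-interpreted user intent enters the pipeline, is not specified in the abstract.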
Related Material

[pdf] [supp] [arXiv] [bibtex]

@InProceedings{Zhang_2025_WACV,
  author    = {Zhang, Jianyi and Zhou, Yufan and Gu, Jiuxiang and Wigington, Curtis and Yu, Tong and Chen, Yiran and Sun, Tong and Zhang, Ruiyi},
  title     = {ARTIST: Improving the Generation of Text-Rich Images with Disentangled Diffusion Models and Large Language Models},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {1268-1278}
}