Grounded Text-to-Image Synthesis with Attention Refocusing

Quynh Phung, Songwei Ge, Jia-Bin Huang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 7932-7942

Abstract


Driven by scalable diffusion models trained on large-scale datasets, text-to-image synthesis methods have shown compelling results. However, these models still fail to precisely follow text prompts involving multiple objects, attributes, or spatial compositions. In this paper, we reveal the potential causes of these failures in the diffusion model's cross-attention and self-attention layers. We propose two novel losses to refocus the attention maps according to a given spatial layout during sampling. Since creating the layouts manually requires additional effort and can be tedious, we explore using large language models (LLMs) to produce these layouts for our method. We conduct extensive experiments on the DrawBench, HRS, and TIFA benchmarks to evaluate our proposed method. We show that our proposed attention refocusing effectively improves the controllability of existing approaches.
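
The abstract does not spell out the form of the two losses, so the following is only an illustrative sketch under assumptions of our own (the function names, the box rasterization into masks, and the guidance step size eta are hypothetical, not the paper's formulation): a cross-attention refocusing term could reward attention mass falling inside each grounded token's layout box and penalize mass leaking outside it, with its gradient used to nudge the latent before each denoising step.

    # Hypothetical sketch only; not the paper's exact losses.
    import torch

    def cross_attention_refocus_loss(attn, box_masks):
        # attn:      (num_grounded_tokens, H, W) cross-attention maps,
        #            each assumed normalized to sum to 1 over the grid.
        # box_masks: (num_grounded_tokens, H, W) binary masks rasterized
        #            from the layout boxes (1 inside a token's box).
        inside = (attn * box_masks).flatten(1).sum(dim=1)         # mass inside each box
        outside = (attn * (1 - box_masks)).flatten(1).sum(dim=1)  # mass leaking outside
        # Push inside mass toward 1 and outside mass toward 0.
        return ((1 - inside) ** 2 + outside ** 2).mean()

    def refocus_latent(latent, attn_fn, box_masks, eta=0.1):
        # Guidance-style update during sampling: step the latent along the
        # negative gradient of the loss before denoising. attn_fn is assumed
        # to return the cross-attention maps for the current latent.
        latent = latent.detach().requires_grad_(True)
        loss = cross_attention_refocus_loss(attn_fn(latent), box_masks)
        grad = torch.autograd.grad(loss, latent)[0]
        return (latent - eta * grad).detach()

The paper pairs the cross-attention loss with a second loss on the self-attention maps; its exact form, and how the two are scheduled over sampling steps, is given in the full paper rather than in this sketch.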

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Phung_2024_CVPR,
    author    = {Phung, Quynh and Ge, Songwei and Huang, Jia-Bin},
    title     = {Grounded Text-to-Image Synthesis with Attention Refocusing},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {7932-7942}
}