AlignIT: Enhancing Prompt Alignment in Customization of Text-to-Image Models
Abstract
We consider the problem of customizing text-to-image diffusion models with user-supplied reference images. Given new prompts, existing methods can capture the key concept from the reference images but fail to align the generated image with the prompt. In this work, we address this issue by proposing new methods that can easily be used in conjunction with existing customization methods, which optimize embeddings and weights at various intermediate stages of the text encoding process before they are fed into the noise prediction model of a text-to-image diffusion model. The first contribution of this paper is a dissection of the various stages of the text encoding process leading up to the conditioning vector for text-to-image models. Taking a holistic view of existing customization methods, we observe that the key and value outputs of this process differ substantially from those of the corresponding baseline (non-customized) models. While this difference does not affect the concept being customized, it causes other parts of the generated image to be misaligned with the prompt. We further observe that these keys and values allow independent control over various aspects of the final generation, enabling semantic manipulation of the output. Taken together, the features spanning these keys and values serve as the basis for our next contribution, in which we fix the aforementioned issues with existing methods. We propose a new post-processing algorithm, AlignIT, that infuses the keys and values for the concept of interest while keeping the keys and values for all other tokens in the input prompt unchanged. Our method can be plugged directly into existing customization methods, leading to a substantial improvement in the alignment of the final result with the input prompt while retaining customization quality. We conduct extensive experiments across various customization methods and a wide variety of reference images, and show consistent improvements both qualitatively and quantitatively.
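The abstract describes AlignIT as infusing the keys and values for the concept of interest while leaving those of all other prompt tokens untouched. The snippet below is a minimal, single-head sketch of that idea in a PyTorch-style cross-attention layer; it is not the authors' released implementation, and the names (infused_cross_attention, concept_token_indices, to_q/to_k/to_v/to_out) and the single-head simplification are illustrative assumptions. In this reading, the baseline (non-customized) text embeddings supply the conditioning for every non-concept token, which preserves prompt alignment, while the customized embeddings influence only the concept token(s).

```python
# Minimal sketch (not the authors' code) of the key/value "infusion" idea
# from the abstract, assuming a PyTorch-style single-head cross-attention layer.
import torch


def infused_cross_attention(
    hidden_states: torch.Tensor,          # image features, (batch, pixels, dim)
    base_text_emb: torch.Tensor,          # baseline text embeddings, (batch, tokens, dim)
    custom_text_emb: torch.Tensor,        # customized text embeddings, (batch, tokens, dim)
    concept_token_indices: torch.Tensor,  # positions of the concept token(s) in the prompt
    to_q, to_k, to_v, to_out,             # the layer's linear projections
) -> torch.Tensor:
    # Keys/values for all tokens start from the baseline model, so the
    # non-concept parts of the prompt keep their original conditioning.
    keys = to_k(base_text_emb)
    values = to_v(base_text_emb)

    # Only the concept token(s) receive keys/values from the customized model,
    # injecting the learned concept without disturbing the rest of the prompt.
    custom_keys = to_k(custom_text_emb)
    custom_values = to_v(custom_text_emb)
    keys[:, concept_token_indices] = custom_keys[:, concept_token_indices]
    values[:, concept_token_indices] = custom_values[:, concept_token_indices]

    # Standard scaled dot-product cross-attention with the mixed keys/values.
    queries = to_q(hidden_states)
    attn = torch.softmax(
        queries @ keys.transpose(-1, -2) / keys.shape[-1] ** 0.5, dim=-1
    )
    return to_out(attn @ values)
```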
Related Material
[pdf]
[arXiv]
[bibtex]
@InProceedings{Agarwal_2025_WACV,
  author    = {Agarwal, Aishwarya and Karanam, Srikrishna and Srinivasan, Balaji Vasan},
  title     = {AlignIT: Enhancing Prompt Alignment in Customization of Text-to-Image Models},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {4882-4890}
}