Counting Guidance for High Fidelity Text-to-Image Synthesis

@InProceedings{Kang_2025_WACV,
  author    = {Kang, Wonjun and Galim, Kevin and Koo, Hyung Il and Cho, Nam Ik},
  title     = {Counting Guidance for High Fidelity Text-to-Image Synthesis},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {899-908}
}
Abstract
Recently, there have been significant improvements in the quality and performance of text-to-image generation, largely due to the impressive results attained by diffusion models. However, text-to-image diffusion models sometimes struggle to create high-fidelity content for the given input prompt. One specific issue is their difficulty in generating the precise number of objects specified in the text prompt. For example, when provided with the prompt "five apples and ten lemons on a table," images generated by diffusion models often contain an incorrect number of objects. In this paper, we present a method to improve diffusion models so that they accurately produce the correct object count based on the input prompt. We adopt a counting network that performs reference-less, class-agnostic counting for any given image. We calculate the gradients of the counting network and use them to refine the predicted noise at each denoising step. To address the presence of multiple types of objects in the prompt, we utilize novel attention map guidance to obtain high-quality masks for each object. Finally, we guide the denoising process using the calculated gradients for each object. Through extensive experiments and evaluation, we demonstrate that the proposed method significantly enhances the fidelity of diffusion models with respect to object count.
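The core idea of gradient-based counting guidance can be illustrated with a minimal toy sketch: at each denoising step, the gradient of a counting loss (squared error between a network's predicted count and the target count) nudges the current estimate toward the correct number of objects. The snippet below is only an illustration of this guidance loop, not the paper's method: `toy_count` is a hypothetical stand-in for the reference-less counting network (a soft-thresholded pixel sum), and its gradient is written out analytically instead of using autograd.

```python
import numpy as np

K = 4.0  # sigmoid sharpness for the toy soft count (assumed, not from the paper)

def toy_count(x):
    # Hypothetical differentiable "counting" proxy:
    # soft-threshold each pixel around 0.5 and sum the responses.
    s = 1.0 / (1.0 + np.exp(-K * (x - 0.5)))
    return s.sum()

def count_loss_grad(x, target):
    # Analytic gradient of the counting loss (count(x) - target)^2 w.r.t. x.
    s = 1.0 / (1.0 + np.exp(-K * (x - 0.5)))
    count = s.sum()
    ds_dx = K * s * (1.0 - s)          # derivative of the sigmoid
    grad = 2.0 * (count - target) * ds_dx
    return grad, count

def guided_denoising(x, target_count, steps=500, scale=0.002):
    # Sketch of the guidance loop: at each step, refine the current
    # estimate with the counting-loss gradient (guidance scale `scale`).
    for _ in range(steps):
        grad, _ = count_loss_grad(x, target_count)
        x = x - scale * grad
    return x

rng = np.random.default_rng(0)
x0 = rng.uniform(0.0, 1.0, size=(8, 8))      # stand-in for a latent/image
x_guided = guided_denoising(x0, target_count=5.0)
_, final_count = count_loss_grad(x_guided, 5.0)
```

In the actual method, the counting network is a learned model, the gradient is taken with respect to the denoised estimate via backpropagation, and per-object attention masks restrict each gradient to its own object before it refines the predicted noise.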