Segmentation-Free Guidance for Text-to-Image Diffusion Models

Kambiz Azarian, Debasmit Das, Qiqi Hou, Fatih Porikli; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 7520-7529

Abstract


We introduce segmentation-free guidance, a novel method designed for text-to-image diffusion models like Stable Diffusion. Our method does not require retraining of the diffusion model. At no additional compute cost, it uses the diffusion model itself as an implied segmentation network (hence the name segmentation-free guidance) to dynamically adjust the negative prompt for each patch of the generated image, based on the patch's relevance to concepts in the prompt. We evaluate segmentation-free guidance both objectively, using FID, CLIP, IS, and PickScore, and subjectively, through human evaluators. For the subjective evaluation, we also propose a methodology for subsampling the prompts in a dataset like MS-COCO-30K to keep the number of human evaluations manageable, while ensuring that the selected subset is both representative in terms of content and fair in terms of model performance. The results demonstrate the superiority of our segmentation-free guidance over the widely used classifier-free method: human evaluators preferred segmentation-free guidance over classifier-free 60% to 19%, with 18% of occasions showing a strong preference. Additionally, PickScore win-rate, a recently proposed metric that mimics human preference, also indicates a preference for our method over classifier-free guidance.
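
The abstract does not give the update rule, so the following PyTorch sketch is only an illustration of the general idea, not the authors' implementation: it replaces the single negative-prompt prediction of classifier-free guidance with a per-patch mixture of negative predictions, weighted by a relevance map assumed here to come from the model's own cross-attention (the "implied segmentation"). All tensor names, shapes, and the relevance computation are assumptions; note also that forming K extra negative predictions as done here would add compute, whereas the paper reports no additional cost.

import torch

def classifier_free_guidance(eps_uncond, eps_cond, scale=7.5):
    # Standard CFG: one negative/unconditional prediction applied
    # uniformly across all spatial locations of the latent.
    return eps_uncond + scale * (eps_cond - eps_uncond)

def segmentation_free_guidance_sketch(eps_cond, eps_neg, relevance, scale=7.5):
    # Hypothetical per-patch variant (shapes are assumptions):
    #   eps_cond:  [B, C, H, W]    noise prediction under the positive prompt
    #   eps_neg:   [B, K, C, H, W] predictions under K candidate negative
    #                              prompts, one per concept in the prompt
    #   relevance: [B, K, H, W]    soft assignment of each latent patch to a
    #                              concept, e.g. read off cross-attention maps;
    #                              sums to 1 over K at every spatial location
    # Mix a spatially varying negative prediction so each patch is pushed
    # away from the negative prompt matched to the concept it depicts.
    eps_neg_spatial = (relevance.unsqueeze(2) * eps_neg).sum(dim=1)
    return eps_neg_spatial + scale * (eps_cond - eps_neg_spatial)

# Toy shapes only; in practice these tensors come from the UNet at each
# denoising step.
B, K, C, H, W = 1, 2, 4, 64, 64
eps_cond = torch.randn(B, C, H, W)
eps_negs = torch.randn(B, K, C, H, W)
relevance = torch.softmax(torch.randn(B, K, H, W), dim=1)
print(segmentation_free_guidance_sketch(eps_cond, eps_negs, relevance).shape)
# torch.Size([1, 4, 64, 64])

With relevance fixed to a uniform map and K = 1, the sketch reduces to standard classifier-free guidance, which is why it can be read as a per-patch generalization of that method.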

Related Material


[pdf]
[bibtex]
@InProceedings{Azarian_2024_CVPR,
    author    = {Azarian, Kambiz and Das, Debasmit and Hou, Qiqi and Porikli, Fatih},
    title     = {Segmentation-Free Guidance for Text-to-Image Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {7520-7529}
}