MagicFusion: Boosting Text-to-Image Generation Performance by Fusing Diffusion Models

Jing Zhao, Heliang Zheng, Chaoyue Wang, Long Lan, Wenjing Yang; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 22592-22602

Abstract


The advent of open-source AI communities has produced a cornucopia of powerful text-guided diffusion models that are trained on various datasets. However, few explorations have been conducted into ensembling such models to combine their strengths. In this work, we propose a simple yet effective method called Saliency-aware Noise Blending (SNB) that can empower the fused text-guided diffusion models to achieve more controllable generation. Specifically, we experimentally find that the responses of classifier-free guidance are highly related to the saliency of generated images. Thus, we propose to trust different models in their areas of expertise by blending the predicted noises of two diffusion models in a saliency-aware manner. SNB is training-free and can be completed within a DDIM sampling process. Additionally, it can automatically align the semantics of two noise spaces without requiring additional annotations such as masks. Extensive experiments show the impressive effectiveness of SNB in various applications. The project page is available at https://magicfusion.github.io.
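The abstract describes blending the classifier-free-guidance noise predictions of two diffusion models according to per-pixel saliency. The paper's exact blending rule is not given here, so the following NumPy sketch is only an illustration of the general idea: compute each model's guidance response, derive per-pixel saliency weights from its magnitude, and mix the two predicted noises accordingly. All function names, shapes, and the softmax weighting scheme are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def cfg_noise(eps_uncond, eps_cond, guidance_scale=7.5):
    """Classifier-free guidance: eps = eps_uncond + s * (eps_cond - eps_uncond).
    Inputs are noise predictions of shape (C, H, W)."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

def saliency_weights(guidance_1, guidance_2, temperature=1.0):
    """Hypothetical saliency weighting: the channel-wise magnitude of each
    model's guidance response (eps_cond - eps_uncond) is treated as a
    per-pixel saliency map, and a softmax over the two maps yields the
    blending weight for model 1 (shape (H, W), values in [0, 1])."""
    s1 = np.linalg.norm(guidance_1, axis=0)
    s2 = np.linalg.norm(guidance_2, axis=0)
    # Subtract the max before exponentiating for numerical stability.
    m = np.maximum(s1, s2)
    w1 = np.exp((s1 - m) / temperature)
    w2 = np.exp((s2 - m) / temperature)
    return w1 / (w1 + w2)

def blend_noise(eps1, eps2, weight):
    """Pixel-wise blend of two predicted noises; at each DDIM step the
    blended noise would replace the single-model prediction."""
    return weight[None] * eps1 + (1.0 - weight[None]) * eps2
```

In an actual DDIM loop, each model would produce `eps_uncond`/`eps_cond` pairs at every timestep, and the blended noise would drive the shared denoising update, which is what makes the method training-free.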

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Zhao_2023_ICCV,
    author    = {Zhao, Jing and Zheng, Heliang and Wang, Chaoyue and Lan, Long and Yang, Wenjing},
    title     = {MagicFusion: Boosting Text-to-Image Generation Performance by Fusing Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {22592-22602}
}