[pdf]
[supp]
[arXiv]
[bibtex]
@InProceedings{Wu_2025_WACV,
    author    = {Wu, Haoning and Shen, Shaocheng and Hu, Qiang and Zhang, Xiaoyun and Zhang, Ya and Wang, Yanfeng},
    title     = {MegaFusion: Extend Diffusion Models towards Higher-Resolution Image Generation without Further Tuning},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {3944-3954}
}
MegaFusion: Extend Diffusion Models towards Higher-Resolution Image Generation without Further Tuning
Abstract
Diffusion models have emerged as frontrunners in text-to-image generation, but their fixed image resolution during training often leads to challenges in high-resolution image generation, such as semantic deviations and object replication. This paper introduces MegaFusion, a novel approach that extends existing diffusion-based text-to-image models towards efficient higher-resolution generation without additional fine-tuning or adaptation. Specifically, we employ an innovative truncate and relay strategy to bridge the denoising processes across different resolutions, allowing for high-resolution image generation in a coarse-to-fine manner. Moreover, by integrating dilated convolutions and noise re-scheduling, we further adapt the model's priors for higher resolution. The versatility and efficacy of MegaFusion make it universally applicable to both latent-space and pixel-space diffusion models, along with other derivative models. Extensive experiments confirm that MegaFusion significantly boosts the capability of existing models to produce images of megapixels and various aspect ratios, while requiring only about 40% of the original computational cost. Code is available at https://haoningwu3639.github.io/MegaFusion/.
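The truncate and relay strategy described above can be sketched as a toy coarse-to-fine loop: denoise at low resolution, truncate that process partway, upsample the intermediate result, re-inject noise, and relay the remaining denoising steps at high resolution. This is a minimal illustration of the idea only; the function name, the nearest-neighbour upsampling, the fixed re-noising level `sigma`, and the dummy denoiser are all assumptions, not the paper's actual schedule or model.

```python
import numpy as np

def truncate_and_relay(denoise_step, low_shape, factor, num_steps, relay_step, rng):
    """Toy coarse-to-fine sketch of a truncate-and-relay schedule.

    Phase 1: run `relay_step` denoising steps at low resolution.
    Phase 2: upsample the truncated result, re-inject noise (a stand-in for
    noise re-scheduling), and finish the remaining steps at high resolution.
    """
    x = rng.standard_normal(low_shape)          # start from pure noise, low-res
    for t in range(num_steps, num_steps - relay_step, -1):
        x = denoise_step(x, t)                  # coarse denoising
    # Relay: upsample (nearest-neighbour here for simplicity) and re-noise.
    x = x.repeat(factor, axis=0).repeat(factor, axis=1)
    sigma = 0.5                                 # assumed re-noising level
    x = x + sigma * rng.standard_normal(x.shape)
    for t in range(num_steps - relay_step, 0, -1):
        x = denoise_step(x, t)                  # fine denoising at high resolution
    return x

# Usage with a dummy denoiser that simply shrinks the signal each step:
rng = np.random.default_rng(0)
out = truncate_and_relay(lambda x, t: 0.9 * x, (16, 16), 2, 50, 30, rng)
print(out.shape)  # the relayed sample ends up at the upsampled resolution
```

Because the expensive high-resolution steps run for only part of the schedule, most of the iterations happen at the cheaper low resolution, which is consistent with the reported computational savings.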