DiffMorpher: Unleashing the Capability of Diffusion Models for Image Morphing

Kaiwen Zhang, Yifan Zhou, Xudong Xu, Bo Dai, Xingang Pan; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 7912-7921

Abstract


Diffusion models have achieved remarkable image generation quality, surpassing previous generative models. However, a notable limitation of diffusion models, in comparison to GANs, is their difficulty in smoothly interpolating between two image samples, due to their highly unstructured latent space. Such a smooth interpolation is intriguing, as it naturally serves as a solution for the image morphing task with many applications. In this work, we address this limitation via DiffMorpher, an approach that enables smooth and natural image interpolation by harnessing the prior knowledge of a pre-trained diffusion model. Our key idea is to capture the semantics of the two images by fitting two LoRAs to them respectively, and to interpolate between both the LoRA parameters and the latent noises to ensure a smooth semantic transition, where correspondence automatically emerges without the need for annotation. In addition, we propose an attention interpolation and injection technique, an adaptive normalization adjustment method, and a new sampling schedule to further enhance the smoothness between consecutive images. Extensive experiments demonstrate that DiffMorpher achieves starkly better image morphing effects than previous methods across a variety of object categories, bridging a critical functional gap that distinguished diffusion models from GANs.
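
To make the two interpolations in the abstract concrete, below is a minimal sketch, in PyTorch, of linearly blending two LoRA parameter sets and spherically interpolating (slerp) two latent noises. The dict keys, tensor shapes, and toy data are illustrative assumptions, not the authors' implementation; in the full method the blended LoRA would be loaded into the diffusion UNet and the interpolated latent denoised to produce each intermediate frame.

```python
# Sketch of the core interpolations described in the abstract (assumed shapes/keys).
import torch


def interpolate_lora(lora_a: dict, lora_b: dict, alpha: float) -> dict:
    """Linearly blend two LoRA state dicts that share keys and shapes."""
    return {k: (1 - alpha) * lora_a[k] + alpha * lora_b[k] for k in lora_a}


def slerp(z_a: torch.Tensor, z_b: torch.Tensor, alpha: float) -> torch.Tensor:
    """Spherical interpolation between two latent noise tensors."""
    a, b = z_a.flatten(), z_b.flatten()
    omega = torch.acos(torch.clamp(torch.dot(a, b) / (a.norm() * b.norm()), -1.0, 1.0))
    if omega.abs() < 1e-6:  # nearly parallel: fall back to linear interpolation
        return (1 - alpha) * z_a + alpha * z_b
    return (torch.sin((1 - alpha) * omega) * z_a
            + torch.sin(alpha * omega) * z_b) / torch.sin(omega)


if __name__ == "__main__":
    # Toy stand-ins for two fitted LoRAs and two inverted latent noises.
    lora_a = {"unet.lora_A": torch.randn(4, 320), "unet.lora_B": torch.randn(320, 4)}
    lora_b = {k: torch.randn_like(v) for k, v in lora_a.items()}
    z_a, z_b = torch.randn(1, 4, 64, 64), torch.randn(1, 4, 64, 64)

    for alpha in torch.linspace(0, 1, 5):
        lora_t = interpolate_lora(lora_a, lora_b, float(alpha))
        z_t = slerp(z_a, z_b, float(alpha))
        # In the full pipeline, lora_t would be applied to the UNet weights
        # and z_t denoised to render the morphing frame at this alpha.
        print(f"alpha={float(alpha):.2f}", z_t.norm().item())
```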

Related Material


@InProceedings{Zhang_2024_CVPR,
    author    = {Zhang, Kaiwen and Zhou, Yifan and Xu, Xudong and Dai, Bo and Pan, Xingang},
    title     = {DiffMorpher: Unleashing the Capability of Diffusion Models for Image Morphing},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {7912-7921}
}