Diffusion-Enhanced PatchMatch: A Framework for Arbitrary Style Transfer With Diffusion Models

Mark Hamazaspyan, Shant Navasardyan; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023, pp. 797-805

Abstract


Diffusion models have gained immense popularity in recent years due to their impressive ability to generate high-quality images. They enable a wide range of applications, from text-to-image synthesis to image restoration and enhancement, as well as image compression and inpainting. However, expressing image style in words can be challenging, making it difficult for diffusion models to generate images in a specific style without additional optimization techniques. In this paper, we present a novel method, Diffusion-Enhanced PatchMatch (DEPM), that leverages Stable Diffusion for style transfer without any finetuning or pretraining. DEPM captures high-level style features while preserving the fine-grained texture details of the original image. By enabling the transfer of arbitrary styles at inference time, our approach makes the process more flexible and efficient. Moreover, its optimization-free nature makes it accessible to a wide range of users.
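The abstract names two ingredients, PatchMatch correspondence and Stable Diffusion, but this page carries no code. As a rough illustration of the PatchMatch component named in the title, the following is a minimal NumPy sketch of the classic PatchMatch nearest-neighbor-field search (Barnes et al., 2009): random initialization, then alternating propagation and random search. The function names, patch size, and SSD patch cost are illustrative assumptions, not the authors' implementation, and the paper's actual coupling of this search to Stable Diffusion features is not attempted here.

import numpy as np

def patch_at(img, y, x, half):
    # Square patch of side 2*half+1 centered at (y, x); callers keep
    # coordinates far enough from the border that the patch always fits.
    return img[y - half:y + half + 1, x - half:x + half + 1]

def patchmatch(src, ref, patch=7, iters=4, seed=0):
    """Approximate nearest-neighbor field from `src` patches to `ref`
    patches. `src` and `ref` are HxWxC float arrays. Pure-Python loops,
    so this is a readable sketch rather than a fast implementation."""
    rng = np.random.default_rng(seed)
    half = patch // 2
    h, w = src.shape[:2]
    rh, rw = ref.shape[:2]
    ys = np.arange(half, h - half)   # interior rows where a patch fits
    xs = np.arange(half, w - half)   # interior columns

    def cost(y, x, ry, rx):
        d = patch_at(src, y, x, half) - patch_at(ref, ry, rx, half)
        return float((d * d).sum())  # sum of squared differences

    # Random initial field: nnf[y, x] = (ry, rx), a valid patch center in ref.
    nnf = np.stack([rng.integers(half, rh - half, (h, w)),
                    rng.integers(half, rw - half, (h, w))], axis=-1)
    best = np.full((h, w), np.inf)
    for y in ys:
        for x in xs:
            best[y, x] = cost(y, x, *nnf[y, x])

    for it in range(iters):
        # Alternate scan order so good matches propagate in both directions.
        step = 1 if it % 2 == 0 else -1
        for y in (ys if step == 1 else ys[::-1]):
            for x in (xs if step == 1 else xs[::-1]):
                # Propagation: try the preceding neighbors' matches, shifted.
                for dy, dx in ((-step, 0), (0, -step)):
                    ny, nx = y + dy, x + dx
                    if half <= ny < h - half and half <= nx < w - half:
                        ry = int(np.clip(nnf[ny, nx, 0] - dy, half, rh - half - 1))
                        rx = int(np.clip(nnf[ny, nx, 1] - dx, half, rw - half - 1))
                        c = cost(y, x, ry, rx)
                        if c < best[y, x]:
                            best[y, x], nnf[y, x] = c, (ry, rx)
                # Random search: candidates in exponentially shrinking windows.
                radius = max(rh, rw)
                while radius >= 1:
                    ry = int(np.clip(nnf[y, x, 0] + rng.integers(-radius, radius + 1),
                                     half, rh - half - 1))
                    rx = int(np.clip(nnf[y, x, 1] + rng.integers(-radius, radius + 1),
                                     half, rw - half - 1))
                    c = cost(y, x, ry, rx)
                    if c < best[y, x]:
                        best[y, x], nnf[y, x] = c, (ry, rx)
                    radius //= 2
    return nnf

Once the field converges, a crude patch-based recoloring can be read off as ref[nnf[..., 0], nnf[..., 1]]; per the abstract, DEPM's contribution is to run this kind of matching in concert with Stable Diffusion rather than on raw pixels alone, which the sketch above does not attempt.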

Related Material


[bibtex]
@InProceedings{Hamazaspyan_2023_CVPR,
    author    = {Hamazaspyan, Mark and Navasardyan, Shant},
    title     = {Diffusion-Enhanced PatchMatch: A Framework for Arbitrary Style Transfer With Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {797-805}
}