Style Injection in Diffusion: A Training-free Approach for Adapting Large-scale Diffusion Models for Style Transfer
Abstract
Despite the impressive generative capabilities of diffusion models, existing diffusion model-based style transfer methods require inference-stage optimization (e.g., fine-tuning or textual inversion of style), which is time-consuming or fails to leverage the generative ability of large-scale diffusion models. To address these issues, we introduce a novel artistic style transfer method based on a pre-trained large-scale diffusion model, without any optimization. Specifically, we manipulate the features of self-attention layers in the way the cross-attention mechanism works: during the generation process, we substitute the key and value of the content with those of the style image. This approach provides several desirable characteristics for style transfer, including 1) preservation of content by transferring similar styles into similar image patches and 2) transfer of style based on the similarity of local texture (e.g., edges) between the content and style images. Furthermore, we introduce query preservation and attention temperature scaling to mitigate the disruption of the original content, and initial latent Adaptive Instance Normalization (AdaIN) to deal with disharmonious colors (failure to transfer the colors of the style). Our experimental results demonstrate that our proposed method surpasses state-of-the-art methods among both conventional and diffusion-based style transfer baselines.
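To make the mechanism described above concrete, below is a minimal PyTorch sketch (not the authors' released implementation) of the operations the abstract names: the content path's self-attention key and value are replaced by those of the style path, the query is blended toward the content query (query preservation), the attention logits are scaled by a temperature, and AdaIN matches the initial latent's channel statistics to the style latent. The function names, tensor shapes, and the default values of `gamma` and `tau` are illustrative assumptions, not the paper's exact settings.

```python
# Minimal sketch of style injection via self-attention key/value substitution,
# query preservation, attention temperature scaling, and initial-latent AdaIN.
# All names and defaults here are illustrative assumptions.
import torch
import torch.nn.functional as F


def style_injected_attention(q_cs, q_c, k_s, v_s, gamma=0.75, tau=1.5):
    """Self-attention where the content's key/value are swapped for the style's.

    q_cs:      query of the stylized generation path, shape (B, N, D)
    q_c:       query of the content reconstruction path, shape (B, N, D)
    k_s, v_s:  key/value taken from the style image's path, shape (B, N, D)
    gamma:     query-preservation blend factor (how much content query to keep)
    tau:       temperature applied to the attention logits
    """
    d = q_cs.shape[-1]
    q = gamma * q_c + (1.0 - gamma) * q_cs            # query preservation
    logits = q @ k_s.transpose(-2, -1) / d ** 0.5     # attend over style tokens
    attn = F.softmax(logits * tau, dim=-1)            # attention temperature scaling
    return attn @ v_s                                 # aggregate the style's values


def adain(content_latent, style_latent, eps=1e-5):
    """AdaIN: match channel-wise mean/std of the content latent to the style latent."""
    c_mean = content_latent.mean(dim=(2, 3), keepdim=True)
    c_std = content_latent.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_latent.mean(dim=(2, 3), keepdim=True)
    s_std = style_latent.std(dim=(2, 3), keepdim=True) + eps
    return (content_latent - c_mean) / c_std * s_std + s_mean


if __name__ == "__main__":
    b, n, d = 1, 64, 320
    q_cs, q_c, k_s, v_s = (torch.randn(b, n, d) for _ in range(4))
    out = style_injected_attention(q_cs, q_c, k_s, v_s)
    print(out.shape)  # torch.Size([1, 64, 320])

    z_c, z_s = torch.randn(1, 4, 64, 64), torch.randn(1, 4, 64, 64)
    print(adain(z_c, z_s).shape)  # torch.Size([1, 4, 64, 64])
```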
Related Material
[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Chung_2024_CVPR,
    author    = {Chung, Jiwoo and Hyun, Sangeek and Heo, Jae-Pil},
    title     = {Style Injection in Diffusion: A Training-free Approach for Adapting Large-scale Diffusion Models for Style Transfer},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {8795-8805}
}