StyleFormer: Real-Time Arbitrary Style Transfer via Parametric Style Composition

Xiaolei Wu, Zhihao Hu, Lu Sheng, Dong Xu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 14618-14627

Abstract


In this work, we propose a new feed-forward arbitrary style transfer method, referred to as StyleFormer, which simultaneously achieves fine-grained style diversity and semantic content coherence. Specifically, our transformer-inspired feature-level stylization method consists of three modules: (a) a style bank generation module for sparse but compact parametric style pattern extraction, (b) a transformer-driven style composition module for content-guided global style composition, and (c) a parametric content modulation module for flexible yet faithful stylization. The output stylized images are impressively coherent with the content structure and sensitive to detailed style variations, while still adhering holistically to the style distributions of the style images. Qualitative and quantitative comparisons, as well as comprehensive user studies, demonstrate that StyleFormer outperforms existing state-of-the-art methods in generating visually plausible stylization results with real-time efficiency.
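To make the content-guided style composition of module (b) concrete, the sketch below shows the generic cross-attention idea it builds on: each content feature acts as a query over a bank of parametric style patterns (keys/values), and the composed style is the attention-weighted mixture. This is a minimal illustration under assumed names (`compose_styles`, `style_keys`, etc. are hypothetical), not the authors' actual implementation, which operates on deep feature maps.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def compose_styles(content_queries, style_keys, style_values):
    """For each content feature vector (query), attend over the style
    bank (keys) and return a convex combination of style codes (values),
    i.e. a content-guided global style composition."""
    composed = []
    for q in content_queries:
        # Scaled dot-product attention scores against every bank entry.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(len(q))
                  for k in style_keys]
        weights = softmax(scores)
        # Weighted sum of style codes: one composed style per content position.
        dim = len(style_values[0])
        mixed = [sum(w * v[d] for w, v in zip(weights, style_values))
                 for d in range(dim)]
        composed.append(mixed)
    return composed

# Toy example: a content query aligned with the first style pattern
# should receive a composed style dominated by that pattern's code.
queries = [[1.0, 0.0]]
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = compose_styles(queries, keys, values)
```

Because the attention weights form a convex combination, every composed style stays inside the hull of the style bank, which is one way the output can remain faithful to the overall style distribution while varying per content position.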

Related Material


[bibtex]
@InProceedings{Wu_2021_ICCV,
  author    = {Wu, Xiaolei and Hu, Zhihao and Sheng, Lu and Xu, Dong},
  title     = {StyleFormer: Real-Time Arbitrary Style Transfer via Parametric Style Composition},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  pages     = {14618-14627}
}