Progressive Attentional Manifold Alignment for Arbitrary Style Transfer

Xuan Luo, Zhen Han, Linkang Yang; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 3206-3222

Abstract


Arbitrary style transfer algorithms can generate stylization results from arbitrary content-style image pairs, but they tend to distort content structures and introduce degraded style patterns. The content distortion problem has been well addressed using high-frequency signals, saliency maps, and low-level features. However, the style degradation problem remains unsolved. Since there is a considerable semantic discrepancy between content and style features, we assume they follow two different manifold distributions. Style degradation arises because existing methods cannot fully leverage the style statistics to render a content feature that lies on a different manifold. We therefore design progressive attentional manifold alignment (PAMA) to align the content manifold to the style manifold. PAMA consists of a channel alignment module that emphasizes related content and style semantics, an attention module that establishes correspondences between features, and a spatial interpolation module that adaptively aligns the manifolds. The proposed PAMA alleviates the style degradation problem and produces state-of-the-art stylization results.
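The three-stage pipeline described above (channel alignment, attention, spatial interpolation) can be sketched as a single alignment step. This is a minimal illustrative NumPy sketch, not the authors' implementation: the channel weighting, the normalized dot-product attention, and the fixed blending ratio `alpha` are all simplifying assumptions standing in for the paper's learned modules.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pama_step(Fc, Fs, alpha=0.5):
    """One hypothetical manifold-alignment step (illustrative only).

    Fc: content features, shape (Nc, C) -- one row per spatial position.
    Fs: style features, shape (Ns, C).
    alpha: blend ratio; in PAMA this role is played by a learned,
           spatially adaptive interpolation module.
    """
    # 1) Channel alignment (assumed form): emphasize channels where
    #    content and style statistics agree.
    wc = np.abs(Fc.mean(axis=0) * Fs.mean(axis=0))
    w = wc / (wc.max() + 1e-8)
    Fc_a, Fs_a = Fc * w, Fs * w

    # 2) Attention: each content position attends over style positions
    #    via normalized dot-product similarity.
    norm = lambda X: (X - X.mean(1, keepdims=True)) / (X.std(1, keepdims=True) + 1e-8)
    A = softmax(norm(Fc_a) @ norm(Fs_a).T / np.sqrt(Fc.shape[1]))
    Fcs = A @ Fs  # style features re-arranged to the content layout

    # 3) Spatial interpolation: blend attended style features back
    #    toward the content feature, moving the content manifold
    #    toward the style manifold.
    return alpha * Fcs + (1 - alpha) * Fc
```

In the paper this step is applied progressively (several times in sequence), so the content manifold is aligned to the style manifold gradually rather than in one shot.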

Related Material


[pdf] [supp] [code]
[bibtex]
@InProceedings{Luo_2022_ACCV,
    author    = {Luo, Xuan and Han, Zhen and Yang, Linkang},
    title     = {Progressive Attentional Manifold Alignment for Arbitrary Style Transfer},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2022},
    pages     = {3206-3222}
}