Style Image Harmonization via Global-Local Style Mutual Guided

Xiao Yan, Yang Lu, Juncheng Shuai, Sanyuan Zhang; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 2306-2321

Abstract


Style image harmonization attaches a region of a source image to a target style image to form a harmonious new image. Existing methods commonly suffer from foreground distortion, content loss, and semantic inconsistency caused by excessive transfer of local style. In this paper, we present a framework for style image harmonization in which global and local styles mutually guide each other to mitigate these problems. Specifically, we extract global information with a Vision Transformer and local information with Convolutional Neural Networks, and adaptively fuse the two kinds of information under a multi-scale fusion structure to reduce the disharmony between foreground and background styles. We then train a blending network, GradGAN, to smooth the image gradient. Finally, we take both style and gradient into account to resolve abrupt gradient changes at the blended boundary. In addition, our training process requires no supervision. Experimental results show that our algorithm balances global and local styles during foreground stylization, preserving the original content of the object while keeping the boundary gradient smooth, and outperforms other methods.
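The abstract describes adaptively fusing global (Vision Transformer) and local (CNN) features at multiple scales. The paper does not give the fusion operator here, so the following is only a minimal illustrative sketch, assuming a per-pixel sigmoid gate that forms a convex combination of the two feature maps at each scale; the function names, gate parameters, and toy shapes are all hypothetical.

```python
import numpy as np

def adaptive_fuse(global_feat, local_feat, gate_w, gate_b):
    """Fuse a global (ViT-like) and a local (CNN-like) feature map with a
    per-pixel gate a in (0, 1): fused = a * global + (1 - a) * local.
    The linear gate here is an illustrative stand-in for a learned module."""
    stacked = np.concatenate([global_feat, local_feat], axis=-1)   # (H, W, 2C)
    a = 1.0 / (1.0 + np.exp(-(stacked @ gate_w + gate_b)))         # (H, W, 1)
    return a * global_feat + (1.0 - a) * local_feat

def multiscale_fuse(global_feats, local_feats, gates):
    """Apply the gated fusion independently at each scale of a feature pyramid."""
    return [adaptive_fuse(g, l, w, b)
            for (g, l), (w, b) in zip(zip(global_feats, local_feats), gates)]

# Toy example: a two-scale pyramid with 4 channels per scale.
rng = np.random.default_rng(0)
scales = [(8, 8), (4, 4)]
C = 4
g_feats = [rng.standard_normal(s + (C,)) for s in scales]
l_feats = [rng.standard_normal(s + (C,)) for s in scales]
gates = [(rng.standard_normal((2 * C, 1)) * 0.1, 0.0) for _ in scales]
fused = multiscale_fuse(g_feats, l_feats, gates)
print([f.shape for f in fused])  # each fused map keeps its scale's shape
```

Because the gate lies strictly in (0, 1), each fused value is an elementwise blend of the corresponding global and local activations, so neither style source can fully suppress the other; the actual model learns this balance end-to-end.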

Related Material


[bibtex]
@InProceedings{Yan_2022_ACCV,
  author    = {Yan, Xiao and Lu, Yang and Shuai, Juncheng and Zhang, Sanyuan},
  title     = {Style Image Harmonization via Global-Local Style Mutual Guided},
  booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
  month     = {December},
  year      = {2022},
  pages     = {2306-2321}
}