Sketch-to-Art: Synthesizing Stylized Art Images From Sketches

Bingchen Liu, Kunpeng Song, Yizhe Zhu, Ahmed Elgammal; Proceedings of the Asian Conference on Computer Vision (ACCV), 2020

Abstract


We propose a new approach for synthesizing fully detailed art-stylized images from sketches. Given a sketch with no semantic tagging and a reference image of a specific style, the model synthesizes meaningful details with colors and textures. Based on the GAN framework, the model consists of three novel modules designed explicitly for better artistic style capturing and generation. To enforce content faithfulness, we introduce a dual-masked mechanism which directly shapes the feature maps according to the sketch. To capture more aspects of artistic style, we design a feature-map transformation for better style consistency with the reference image. Finally, an inverse process of instance normalization disentangles the style and content information and further improves synthesis quality. Experiments demonstrate a significant qualitative and quantitative improvement over baseline models based on previous state-of-the-art techniques, modified for the proposed task (17% better Fréchet Inception Distance and 18% better style classification score). Moreover, the lightweight design of the proposed modules enables high-quality synthesis at 512 × 512 resolution.
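To illustrate the style/content disentanglement idea mentioned in the abstract, below is a minimal, hypothetical PyTorch-style sketch of how instance-normalization statistics can be stripped from content features and re-applied from reference-style features. The function names, tensor shapes, and overall structure are assumptions for illustration only and do not reproduce the authors' actual modules.

```python
import torch

def strip_instance_stats(feat, eps=1e-5):
    """Remove per-channel instance statistics (an 'inverse' of instance norm):
    the normalized map keeps spatial content, the (mean, std) carry style.
    Hypothetical helper, not the paper's implementation."""
    mean = feat.mean(dim=(2, 3), keepdim=True)
    std = feat.var(dim=(2, 3), keepdim=True).add(eps).sqrt()
    return (feat - mean) / std, mean, std

def restylize(content_feat, style_feat):
    """Re-dress normalized content features with the reference style statistics."""
    normalized, _, _ = strip_instance_stats(content_feat)
    _, style_mean, style_std = strip_instance_stats(style_feat)
    return normalized * style_std + style_mean

# Example with dummy B x C x H x W feature maps for the sketch (content)
# and the reference image (style).
content = torch.randn(1, 256, 64, 64)
style = torch.randn(1, 256, 64, 64)
out = restylize(content, style)
print(out.shape)  # torch.Size([1, 256, 64, 64])
```

In this sketch, removing the per-instance mean and standard deviation strips style-like statistics from the content branch, while the statistics taken from the reference features inject the target style; the paper's actual dual-masked mechanism and feature-map transformation go beyond this simplified picture.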

Related Material


[pdf] [supp] [code]
[bibtex]
@InProceedings{Liu_2020_ACCV,
  author    = {Liu, Bingchen and Song, Kunpeng and Zhu, Yizhe and Elgammal, Ahmed},
  title     = {Sketch-to-Art: Synthesizing Stylized Art Images From Sketches},
  booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
  month     = {November},
  year      = {2020}
}