@InProceedings{Yang_2025_ICCV,
  author    = {Yang, Rui and Li, Huining and Long, Yiyi and Wu, Xiaojun and He, Shengfeng},
  title     = {Stroke2Sketch: Harnessing Stroke Attributes for Training-Free Sketch Generation},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2025},
  pages     = {16545-16554}
}
Stroke2Sketch: Harnessing Stroke Attributes for Training-Free Sketch Generation
Abstract
Generating sketches guided by reference styles requires precise transfer of stroke attributes, such as line thickness, deformation, and texture sparsity, while preserving semantic structure and content fidelity. To this end, we propose Stroke2Sketch, a novel training-free framework that introduces cross-image stroke attention, a mechanism embedded within self-attention layers to establish fine-grained semantic correspondences and enable accurate stroke attribute transfer. This allows our method to adaptively integrate reference stroke characteristics into content images while maintaining structural integrity. Additionally, we develop adaptive contrast enhancement and semantic-focused attention to reinforce content preservation and foreground emphasis. Stroke2Sketch effectively synthesizes stylistically faithful sketches that closely resemble handcrafted results, outperforming existing methods in expressive stroke control and semantic coherence.
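The core mechanism described above, cross-image attention between a content image and a reference sketch, can be illustrated with a minimal sketch: queries come from content-image features while keys and values come from reference-sketch features, so each content token pulls in stroke attributes from semantically corresponding reference regions. This is an assumed, simplified rendering for illustration only, not the paper's implementation; the function name, shapes, and the choice of plain scaled dot-product attention are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_image_attention(content_feats, ref_feats):
    """Toy cross-image attention (hypothetical helper).

    content_feats: (N_c, d) features of the content image (queries).
    ref_feats:     (N_r, d) features of the reference sketch (keys/values).
    Returns (N_c, d) content features re-expressed as mixtures of
    reference-stroke features, weighted by semantic similarity.
    """
    d = content_feats.shape[-1]
    Q, K, V = content_feats, ref_feats, ref_feats
    # Each row of `attn` is a distribution over reference tokens.
    attn = softmax(Q @ K.T / np.sqrt(d), axis=-1)  # (N_c, N_r)
    return attn @ V

# Example: 4 content tokens attending over 6 reference tokens.
rng = np.random.default_rng(0)
content = rng.normal(size=(4, 8))
reference = rng.normal(size=(6, 8))
out = cross_image_attention(content, reference)
```

In a diffusion model this kind of substitution is typically embedded inside existing self-attention layers (as the abstract states), rather than applied to raw features as shown here.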
