SketchFusion: Learning Universal Sketch Features through Fusing Foundation Models
Abstract
While foundation models have revolutionised computer vision, their effectiveness for sketch understanding remains limited by the unique challenges of abstract, sparse visual inputs. Through systematic analysis, we uncover two fundamental limitations: Stable Diffusion (SD) struggles to extract meaningful features from abstract sketches (unlike its success with photos), and exhibits a pronounced frequency-domain bias that suppresses essential low-frequency components needed for sketch understanding. Rather than resorting to costly retraining, we address these limitations by strategically combining SD with CLIP, whose strong semantic understanding naturally compensates for SD's spatial-frequency biases. By dynamically injecting CLIP features into SD's denoising process and adaptively aggregating features across semantic levels, our method achieves state-of-the-art performance in sketch retrieval (+3.35%), recognition (+1.06%), segmentation (+29.42%), and correspondence learning (+21.22%), demonstrating the first truly universal sketch feature representation in the era of foundation models.
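For readers who want a concrete picture of the fusion idea sketched in the abstract, below is a minimal PyTorch illustration: CLIP token embeddings are projected into the channel space of intermediate denoising-UNet features and added in, and several UNet levels are then aggregated with learned softmax weights. All module names, feature shapes, and the pooling/broadcast scheme here are illustrative assumptions for exposition, not the authors' implementation.

import torch
import torch.nn as nn

class CLIPInjectionFusion(nn.Module):
    """Illustrative sketch (not the paper's code): inject pooled CLIP
    semantics into multi-level UNet features, then adaptively aggregate
    the levels with learned weights."""
    def __init__(self, clip_dim=768, unet_dims=(320, 640, 1280)):
        super().__init__()
        # One projection per UNet level, mapping CLIP embeddings into that level's channel space.
        self.proj = nn.ModuleList(nn.Linear(clip_dim, d) for d in unet_dims)
        # 1x1 convs to unify all levels to a common channel count before aggregation.
        self.out_dim = unet_dims[-1]
        self.to_common = nn.ModuleList(nn.Conv2d(d, self.out_dim, 1) for d in unet_dims)
        # Learned per-level logits; softmax gives adaptive aggregation weights.
        self.level_logits = nn.Parameter(torch.zeros(len(unet_dims)))

    def forward(self, unet_feats, clip_tokens):
        # unet_feats: list of (B, C_l, H_l, W_l) intermediate UNet feature maps.
        # clip_tokens: (B, T, clip_dim) token embeddings from a CLIP image encoder.
        weights = torch.softmax(self.level_logits, dim=0)
        target_hw = unet_feats[0].shape[-2:]  # resample everything to the finest level
        fused = []
        for i, feat in enumerate(unet_feats):
            # "Inject" CLIP semantics: mean-pool tokens, project, broadcast-add over H x W.
            clip_vec = self.proj[i](clip_tokens.mean(dim=1))      # (B, C_l)
            feat = feat + clip_vec[:, :, None, None]
            feat = self.to_common[i](feat)                        # (B, out_dim, H_l, W_l)
            feat = nn.functional.interpolate(
                feat, size=target_hw, mode="bilinear", align_corners=False)
            fused.append(weights[i] * feat)
        return torch.stack(fused).sum(dim=0)  # (B, out_dim, H_0, W_0)

if __name__ == "__main__":
    B = 2
    feats = [torch.randn(B, 320, 64, 64),
             torch.randn(B, 640, 32, 32),
             torch.randn(B, 1280, 16, 16)]
    clip_tokens = torch.randn(B, 197, 768)  # e.g. ViT-style patch tokens
    print(CLIPInjectionFusion()(feats, clip_tokens).shape)  # torch.Size([2, 1280, 64, 64])

Mean-pooling the CLIP tokens is the simplest possible injection; cross-attention between UNet features and CLIP tokens would be a natural, more expressive alternative under the same interface.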
Related Material
[pdf] [arXiv]

[bibtex]
@InProceedings{Koley_2025_CVPR,
  author    = {Koley, Subhadeep and Dutta, Tapas Kumar and Sain, Aneeshan and Chowdhury, Pinaki Nath and Bhunia, Ayan Kumar and Song, Yi-Zhe},
  title     = {SketchFusion: Learning Universal Sketch Features through Fusing Foundation Models},
  booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
  month     = {June},
  year      = {2025},
  pages     = {2556-2567}
}