Sketchformer: Transformer-Based Representation for Sketched Structure

Leo Sampaio Ferraz Ribeiro, Tu Bui, John Collomosse, Moacir Ponti; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 14153-14162

Abstract


Sketchformer is a novel transformer-based representation for encoding free-hand sketches input in vector form, i.e. as a sequence of strokes. Sketchformer effectively addresses multiple tasks: sketch classification, sketch-based image retrieval (SBIR), and the reconstruction and interpolation of sketches. We report several variants exploring continuous and tokenized input representations, and contrast their performance. Our learned embedding, driven by a dictionary-learning tokenization scheme, yields state-of-the-art performance in classification and image retrieval tasks when compared against baseline representations driven by LSTM sequence-to-sequence architectures: SketchRNN and derivatives. We show that sketch reconstruction and interpolation are improved significantly by the Sketchformer embedding for complex sketches with longer stroke sequences.
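The abstract contrasts continuous and tokenized stroke-sequence inputs, with the tokenized variant driven by a learned dictionary. As a rough illustration only (a minimal sketch, not the authors' exact scheme; the codebook size, k-means procedure, and synthetic data below are assumptions), one can cluster the (dx, dy) pen offsets of a vector sketch into a small codebook and map each offset to its nearest code, producing a discrete token sequence suitable for a transformer:

```python
import numpy as np

def learn_codebook(offsets, k=8, iters=20, seed=0):
    """Toy k-means over (dx, dy) stroke offsets; returns a (k, 2) codebook."""
    rng = np.random.default_rng(seed)
    codes = offsets[rng.choice(len(offsets), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign each offset to its nearest code (Euclidean distance)
        d = np.linalg.norm(offsets[:, None, :] - codes[None, :, :], axis=-1)
        assign = d.argmin(axis=1)
        # move each code to the mean of the offsets assigned to it
        for j in range(k):
            members = offsets[assign == j]
            if len(members):
                codes[j] = members.mean(axis=0)
    return codes

def tokenize(offsets, codes):
    """Map each (dx, dy) offset to the index of its nearest codebook entry."""
    d = np.linalg.norm(offsets[:, None, :] - codes[None, :, :], axis=-1)
    return d.argmin(axis=1)

# a tiny synthetic "sketch": a sequence of 200 pen offsets
rng = np.random.default_rng(1)
strokes = rng.normal(size=(200, 2))
codebook = learn_codebook(strokes, k=8)
tokens = tokenize(strokes, codebook)  # discrete token sequence
```

In the continuous variant the raw (dx, dy, pen-state) values would instead be projected directly into the transformer's embedding space, which is the trade-off the paper's ablation explores.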

Related Material


[bibtex]
@InProceedings{Ribeiro_2020_CVPR,
author = {Ribeiro, Leo Sampaio Ferraz and Bui, Tu and Collomosse, John and Ponti, Moacir},
title = {Sketchformer: Transformer-Based Representation for Sketched Structure},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020},
pages = {14153-14162}
}