Sketch Tokens: A Learned Mid-level Representation for Contour and Object Detection

Joseph J. Lim, C. L. Zitnick, Piotr Dollar; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 3158-3165

Abstract


We propose a novel approach to both learning and detecting local contour-based representations for mid-level features. Our features, called sketch tokens, are learned using supervised mid-level information in the form of hand-drawn contours in images. Patches of human-generated contours are clustered to form sketch token classes, and a random forest classifier is used for efficient detection in novel images. We demonstrate our approach on both top-down and bottom-up tasks. We show state-of-the-art results on the top-down task of contour detection while being over 200x faster than competing methods. We also achieve large improvements in detection accuracy for the bottom-up tasks of pedestrian and object detection as measured on INRIA [5] and PASCAL [10], respectively. These gains are due to the complementary information provided by sketch tokens to low-level features such as gradient histograms.
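The abstract outlines a two-stage pipeline: cluster patches of hand-drawn contours into sketch token classes, then train a random forest to detect those classes in novel image patches. Below is a minimal illustrative sketch of that pipeline using scikit-learn stand-ins (KMeans, RandomForestClassifier); the patch size, number of token classes, feature choice, and all parameter values are assumptions for illustration, not the authors' settings.

```python
# Minimal sketch of the sketch-token idea from the abstract.
# All constants and helper names here are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

PATCH = 35       # assumed patch width/height in pixels
N_TOKENS = 150   # assumed number of sketch token classes

def learn_sketch_tokens(contour_patches):
    """Cluster hand-drawn contour patches into sketch token classes.

    contour_patches: (n, PATCH, PATCH) binary arrays cropped from
    human-drawn contour maps. Each cluster center is one token class.
    """
    X = contour_patches.reshape(len(contour_patches), -1).astype(np.float64)
    return KMeans(n_clusters=N_TOKENS, n_init=10, random_state=0).fit(X)

def train_token_detector(patch_features, token_labels):
    """Train a random forest to predict a patch's sketch token class
    (or background, label 0) from low-level features such as gradient
    histograms computed over the image patch."""
    clf = RandomForestClassifier(n_estimators=25, n_jobs=-1, random_state=0)
    clf.fit(patch_features, token_labels)
    return clf

def contour_probability(clf, patch_features, background_label=0):
    """Per-patch contour probability: 1 - P(background), i.e. the total
    probability mass assigned to any sketch token class."""
    proba = clf.predict_proba(patch_features)
    bg_idx = list(clf.classes_).index(background_label)
    return 1.0 - proba[:, bg_idx]
```

In this reading, summing the predicted token-class probabilities (equivalently, one minus the background probability) yields a contour strength map, while the per-class probabilities themselves can serve as mid-level features for downstream detectors.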

Related Material


[bibtex]
@InProceedings{Lim_2013_CVPR,
author = {Lim, Joseph J. and Zitnick, C. L. and Dollar, Piotr},
title = {Sketch Tokens: A Learned Mid-level Representation for Contour and Object Detection},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2013}
}