Strokelets: A Learned Multi-Scale Representation for Scene Text Recognition

Cong Yao, Xiang Bai, Baoguang Shi, Wenyu Liu; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 4042-4049

Abstract

Driven by the wide range of applications, scene text detection and recognition have become active research topics in computer vision. Though extensively studied, localizing and reading text in uncontrolled environments remain extremely challenging due to various interference factors. In this paper, we propose a novel multi-scale representation for scene text recognition. This representation consists of a set of detectable primitives, termed strokelets, which capture the essential substructures of characters at different granularities. Strokelets possess four distinctive advantages: (1) Usability: automatically learned from bounding box labels; (2) Robustness: insensitive to interference factors; (3) Generality: applicable to multiple languages; and (4) Expressivity: effective at describing characters. Extensive experiments on standard benchmarks verify the advantages of strokelets and demonstrate the effectiveness of the proposed algorithm for text recognition.
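To make the abstract's idea concrete, below is a minimal, hypothetical sketch (not the paper's actual algorithm) of how multi-scale stroke-like prototypes could be learned from character bounding boxes alone: crop patches at several scales from within the labeled boxes, then cluster each scale separately so the cluster centers act as prototype parts. All function names, scales, and parameters here are illustrative assumptions.

```python
import numpy as np

def sample_patches(image, boxes, scales=(8, 16), patches_per_box=20, rng=None):
    """Crop square patches at several scales from within each character box.

    Only bounding-box supervision is used, mirroring the abstract's
    "automatically learned from bounding box labels" property.
    """
    rng = np.random.default_rng(rng)
    patches = {s: [] for s in scales}
    for (x0, y0, x1, y1) in boxes:
        for s in scales:
            if x1 - x0 < s or y1 - y0 < s:
                continue  # box too small for this scale
            for _ in range(patches_per_box):
                x = rng.integers(x0, x1 - s + 1)
                y = rng.integers(y0, y1 - s + 1)
                p = image[y:y + s, x:x + s].astype(np.float64).ravel()
                p -= p.mean()  # crude contrast normalization
                patches[s].append(p)
    return {s: np.array(v) for s, v in patches.items() if v}

def kmeans(X, k, iters=20, rng=None):
    """Plain k-means; each centroid serves as one learned prototype."""
    rng = np.random.default_rng(rng)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(0)
    return centers

# Toy usage: a random "image" with two synthetic character boxes.
img = np.random.default_rng(0).random((64, 128))
boxes = [(5, 5, 45, 60), (70, 5, 120, 60)]
bank = {s: kmeans(P, k=4, rng=0)
        for s, P in sample_patches(img, boxes, rng=0).items()}
```

The per-scale prototype bank loosely mirrors the multi-scale character substructures the abstract describes; a real system would, of course, use richer features and a detection stage on top of the learned primitives.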

Related Material
[pdf]
[bibtex]
@InProceedings{Yao_2014_CVPR,
author = {Yao, Cong and Bai, Xiang and Shi, Baoguang and Liu, Wenyu},
title = {Strokelets: A Learned Multi-Scale Representation for Scene Text Recognition},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2014}
}