Handwritten and Printed Text Segmentation: A Signature Case Study

Sina Gholamian, Ali Vahdat; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 582-592

Abstract


In scanned documents, handwritten text can overlap with printed text. This overlap causes difficulties during the optical character recognition (OCR) and digitization process of documents and, subsequently, hurts downstream NLP tasks. Prior research either focuses solely on the binary classification of handwritten text or performs a three-class segmentation of the document, i.e., recognition of handwritten, printed, and background pixels. This approach assigns overlapping handwritten and printed pixels to only one of the classes, so they are not accounted for in the other class. Therefore, in this research, we develop novel approaches to address the challenges of handwritten and printed text segmentation. Our objective is to recover text from different classes in their entirety, especially enhancing the segmentation performance on overlapping sections. To support this task, we introduce a new dataset, SignaTR6K, collected from real legal documents, as well as a new model architecture for the handwritten and printed text segmentation task. Our best configuration outperforms prior work on two different datasets by 17.9% and 7.3% in IoU scores. The SignaTR6K dataset is accessible for download via the following link: https://forms.office.com/r/2a5RDg7cAY.
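
As context for the IoU figures quoted above, the sketch below illustrates how per-class IoU can be computed when overlapping pixels are allowed to belong to both the handwritten and printed masks, which is the setting the abstract motivates. This is a minimal illustration, not the authors' evaluation code; the binary-mask encoding, function name, and toy values are assumptions for demonstration only.

import numpy as np

def binary_iou(pred_mask, gt_mask):
    """IoU between two binary masks (H, W) for a single class."""
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return intersection / union if union > 0 else float("nan")

# Toy 2x3 example (hypothetical data): the bottom-right pixel is both
# handwritten and printed (overlap), so it appears in BOTH ground-truth
# masks instead of being forced into a single class.
gt_printed     = np.array([[0, 1, 1], [0, 0, 1]], dtype=bool)
gt_handwritten = np.array([[0, 0, 0], [1, 1, 1]], dtype=bool)

pred_printed     = np.array([[0, 1, 1], [0, 0, 1]], dtype=bool)
pred_handwritten = np.array([[0, 0, 0], [1, 1, 0]], dtype=bool)  # misses the overlap pixel

print("printed IoU:    ", binary_iou(pred_printed, gt_printed))
print("handwritten IoU:", binary_iou(pred_handwritten, gt_handwritten))

With single-label three-class masks, the overlap pixel would count toward only one class and silently lower the recall of the other; evaluating each class against its own binary mask, as sketched here, makes that loss visible.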

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Gholamian_2023_ICCV,
    author    = {Gholamian, Sina and Vahdat, Ali},
    title     = {Handwritten and Printed Text Segmentation: A Signature Case Study},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {582-592}
}