Illegible Text to Readable Text: An Image-to-Image Transformation Using Conditional Sliced Wasserstein Adversarial Networks

Mostafa Karimi, Gopalkrishna Veni, Yen-Yun Yu; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, pp. 552-553

Abstract


Automatic text recognition from ancient handwritten record images is an important problem in the genealogy domain. However, critical challenges such as varying noise conditions, vanishing text, and variations in handwriting make the recognition task difficult. We tackle this problem by developing a handwritten-to-machine-print conditional Generative Adversarial Network (HW2MP-GAN) model that formulates handwritten recognition as a text-image-to-text-image translation problem, in which a given image, typically in an illegible form, is converted into another image close to its machine-print form. The proposed model consists of three components: a generator, a word-level discriminator, and a character-level discriminator. HW2MP-GAN incorporates the Sliced Wasserstein distance (SWD) and a U-Net architecture for better-quality image-to-image transformation. Our experiments reveal that HW2MP-GAN outperforms state-of-the-art baseline cGAN models by almost 30 in Fréchet Handwritten Distance (FHD), 0.6 in average Levenshtein distance, and 39% in word accuracy for image-to-image translation on the IAM database. Further, HW2MP-GAN improves handwritten recognition word accuracy by 1.3% compared to baseline handwritten recognition models on the IAM database.
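The Sliced Wasserstein distance named in the abstract approximates a high-dimensional optimal-transport distance by projecting both sample sets onto many random 1-D directions, where the Wasserstein distance reduces to comparing sorted projections. A minimal NumPy sketch of this general technique follows; the function name and parameters are illustrative, not the paper's implementation (which uses SWD as an adversarial training objective rather than a standalone metric).

```python
import numpy as np

def sliced_wasserstein_distance(x, y, n_projections=50, seed=0):
    """Monte Carlo estimate of the (squared W2) sliced Wasserstein
    distance between two equally sized point sets.

    x, y: arrays of shape (n_samples, n_features).
    """
    rng = np.random.default_rng(seed)
    dim = x.shape[1]
    total = 0.0
    for _ in range(n_projections):
        # Draw a random unit direction on the sphere.
        theta = rng.normal(size=dim)
        theta /= np.linalg.norm(theta)
        # Project both sets to 1-D and sort: in 1-D, optimal transport
        # simply matches order statistics.
        px = np.sort(x @ theta)
        py = np.sort(y @ theta)
        total += np.mean((px - py) ** 2)
    return total / n_projections
```

Because each 1-D problem is solved by a sort, the estimate costs O(n log n) per projection instead of solving a full transport problem, which is what makes SWD practical inside a GAN training loop.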

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Karimi_2020_CVPR_Workshops,
author = {Karimi, Mostafa and Veni, Gopalkrishna and Yu, Yen-Yun},
title = {Illegible Text to Readable Text: An Image-to-Image Transformation Using Conditional Sliced Wasserstein Adversarial Networks},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2020}
}