DewarpNet: Single-Image Document Unwarping With Stacked 3D and 2D Regression Networks

Sagnik Das, Ke Ma, Zhixin Shu, Dimitris Samaras, Roy Shilkrot; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 131-140

Abstract


Capturing document images with hand-held devices in unstructured environments is a common practice nowadays. However, "casual" photos of documents are usually unsuitable for automatic information extraction, mainly due to physical distortion of the document paper, as well as varying camera positions and illumination conditions. In this work, we propose DewarpNet, a deep-learning approach for document image unwarping from a single image. Our insight is that the 3D geometry of the document not only determines the warping of its texture but also causes the illumination effects. Therefore, our novelty resides in the explicit modeling of the 3D shape of the document paper in an end-to-end pipeline. We also contribute the largest and most comprehensive dataset for document image unwarping to date - Doc3D. This dataset features multiple ground-truth annotations, including 3D shape, surface normals, UV maps, and albedo images. Training with Doc3D, we demonstrate state-of-the-art performance for DewarpNet with extensive qualitative and quantitative evaluations. Our network also significantly improves OCR performance on captured document images, decreasing character error rate by 42% on average. Both the code and the dataset are released.
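To make the unwarping idea concrete, the following is a minimal sketch of the 2D remapping step that a backward-map-based pipeline performs: given a captured photo and a per-pixel map telling each output pixel where to sample in the warped image, the flat document is recovered by resampling. The function name `unwarp` and the nearest-neighbor sampling are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def unwarp(warped_img, backward_map):
    """Resample a warped document photo into a flat image.

    warped_img:   (H, W, C) array, the captured photo.
    backward_map: (H_out, W_out, 2) array of (u, v) coordinates in
                  [0, 1]; each output pixel reads from that location
                  in the warped image. (Hypothetical stand-in for a
                  network-predicted backward map.)
    """
    H, W = warped_img.shape[:2]
    # Scale normalized coordinates to pixel indices and clamp to bounds.
    u = np.clip(np.rint(backward_map[..., 0] * (W - 1)).astype(int), 0, W - 1)
    v = np.clip(np.rint(backward_map[..., 1] * (H - 1)).astype(int), 0, H - 1)
    # Nearest-neighbor gather; real pipelines would use bilinear sampling.
    return warped_img[v, u]

# Sanity check: an identity backward map reproduces the input image.
img = np.arange(2 * 3 * 1, dtype=float).reshape(2, 3, 1)
uu, vv = np.meshgrid(np.linspace(0, 1, 3), np.linspace(0, 1, 2))
identity_map = np.stack([uu, vv], axis=-1)
flat = unwarp(img, identity_map)
```

In a learned pipeline, `backward_map` would come from the 2D regression stage, conditioned on the 3D shape predicted by the first stage.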

Related Material


[pdf]
[bibtex]
@InProceedings{Das_2019_ICCV,
author = {Das, Sagnik and Ma, Ke and Shu, Zhixin and Samaras, Dimitris and Shilkrot, Roy},
title = {DewarpNet: Single-Image Document Unwarping With Stacked 3D and 2D Regression Networks},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}