RoDLA: Benchmarking the Robustness of Document Layout Analysis Models

Yufan Chen, Jiaming Zhang, Kunyu Peng, Junwei Zheng, Ruiping Liu, Philip Torr, Rainer Stiefelhagen; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 15556-15566

Abstract


Before deploying a Document Layout Analysis (DLA) model in real-world applications, conducting comprehensive robustness testing is essential. However, the robustness of DLA models remains underexplored in the literature. To address this, we are the first to introduce a robustness benchmark for DLA models, which includes 450K document images across three datasets. To cover realistic corruptions, we propose a perturbation taxonomy with 12 common document perturbations at 3 severity levels, inspired by real-world document processing. Additionally, to better understand the impact of document perturbations, we propose two metrics: Mean Perturbation Effect (mPE) for perturbation assessment and Mean Robustness Degradation (mRD) for robustness evaluation. Furthermore, we introduce a self-titled model, i.e., the Robust Document Layout Analyzer (RoDLA), which improves attention mechanisms to boost the extraction of robust features. Experiments on the proposed benchmarks (PubLayNet-P, DocLayNet-P, and M6Doc-P) demonstrate that RoDLA obtains state-of-the-art mRD scores of 115.7, 135.4, and 150.4, respectively. Compared to previous methods, RoDLA achieves notable improvements in mAP of +3.8%, +7.1%, and +12.1%, respectively.
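The abstract does not spell out how mRD is computed; the sketch below is only an illustrative aggregation under the assumption of an mCE-style normalization (as used for ImageNet-C), averaging a model's mAP degradation relative to a baseline model over all perturbation/severity pairs. The function name and data layout are hypothetical, not the paper's exact formulation.

```python
# Hedged sketch: assumes an mCE-style relative-degradation aggregation over
# 12 perturbations x 3 severity levels; not the paper's exact mRD definition.
from typing import Dict, Tuple

Key = Tuple[str, int]  # (perturbation name, severity level 1..3)

def mean_robustness_degradation(
    model_map: Dict[Key, float],      # model mAP on each perturbed set
    baseline_map: Dict[Key, float],   # baseline-model mAP on each perturbed set
    clean_model_map: float,           # model mAP on clean data
    clean_baseline_map: float,        # baseline mAP on clean data
) -> float:
    """Average the model's mAP drop relative to the baseline's drop (lower is better)."""
    scores = []
    for key, perturbed_map in model_map.items():
        model_drop = clean_model_map - perturbed_map
        baseline_drop = clean_baseline_map - baseline_map[key]
        if baseline_drop > 0:
            scores.append(100.0 * model_drop / baseline_drop)
    return sum(scores) / len(scores)
```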

Related Material


[bibtex]
@InProceedings{Chen_2024_CVPR,
    author    = {Chen, Yufan and Zhang, Jiaming and Peng, Kunyu and Zheng, Junwei and Liu, Ruiping and Torr, Philip and Stiefelhagen, Rainer},
    title     = {RoDLA: Benchmarking the Robustness of Document Layout Analysis Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {15556-15566}
}