XYLayoutLM: Towards Layout-Aware Multimodal Networks for Visually-Rich Document Understanding

Zhangxuan Gu, Changhua Meng, Ke Wang, Jun Lan, Weiqiang Wang, Ming Gu, Liqing Zhang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 4583-4592

Abstract


Recently, various multimodal networks for Visually-Rich Document Understanding (VRDU) have been proposed, showing that transformers benefit from integrating visual and layout information with text embeddings. However, most existing approaches rely on position embeddings to encode sequence information, neglecting the noisy, improper reading orders produced by OCR tools. In this paper, we propose a robust layout-aware multimodal network named XYLayoutLM that captures and leverages rich layout information from the proper reading orders produced by our Augmented XY Cut. Moreover, a Dilated Conditional Position Encoding module is proposed to handle input sequences of variable length; it additionally extracts local layout information from both the textual and visual modalities while generating position embeddings. Experimental results show that our XYLayoutLM achieves competitive results on document understanding tasks.
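The XY Cut named in the abstract builds on a classical recursive page-segmentation algorithm: project the OCR bounding boxes onto one axis, split at whitespace gaps wider than a threshold, and recurse on the alternate axis, emitting blocks top-to-bottom and left-to-right. The following is a minimal Python sketch of that classical procedure, not the paper's augmented variant; the (x0, y0, x1, y1) box format and the min_gap threshold are illustrative assumptions.

def xy_cut(boxes, min_gap=10):
    """Return indices of boxes (x0, y0, x1, y1) in reading order."""
    def split(ids, axis):
        # Project boxes onto the chosen axis and group them, cutting
        # wherever the whitespace gap between projections exceeds min_gap.
        lo, hi = (1, 3) if axis == "y" else (0, 2)
        spans = sorted((boxes[i][lo], boxes[i][hi], i) for i in ids)
        groups, current, end = [], [spans[0][2]], spans[0][1]
        for s, e, i in spans[1:]:
            if s - end > min_gap:
                groups.append(current)
                current = [i]
            else:
                current.append(i)
            end = max(end, e)
        groups.append(current)
        return groups

    def recurse(ids, axis="y"):
        if len(ids) <= 1:
            return list(ids)
        # Try cutting along the current axis, then the alternate one.
        for ax in (axis, "x" if axis == "y" else "y"):
            groups = split(ids, ax)
            if len(groups) > 1:
                nxt = "x" if ax == "y" else "y"
                return [i for g in groups for i in recurse(g, nxt)]
        return list(ids)  # indivisible on both axes: emit as one block

    return recurse(list(range(len(boxes))))

For example, a title box above a two-column body comes out title first, then the left column, then the right:

boxes = [(50, 10, 300, 30),   # title spanning the page
         (50, 45, 160, 95),   # left column
         (180, 45, 300, 95)]  # right column
print(xy_cut(boxes))          # [0, 1, 2]

Likewise, a conditional position encoding generates position embeddings from the token features themselves via convolution, so any sequence length is handled without a fixed-size embedding table, and dilated kernels widen the local context feeding each embedding. The sketch below is a rough 1-D analog under assumed hyperparameters (kernel size, dilation rates), not the paper's exact DCPE module.

import torch
import torch.nn as nn

class DilatedConditionalPosEnc(nn.Module):
    """Generate position embeddings from token features with
    depthwise dilated 1-D convolutions (hyperparameters assumed)."""
    def __init__(self, dim, kernel_size=3, dilations=(1, 2, 3)):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv1d(dim, dim, kernel_size,
                      padding=d * (kernel_size - 1) // 2,  # keep length
                      dilation=d, groups=dim)              # depthwise
            for d in dilations
        ])

    def forward(self, x):                          # x: (batch, seq_len, dim)
        h = x.transpose(1, 2)                      # -> (batch, dim, seq_len)
        pos = sum(conv(h) for conv in self.convs)  # multi-dilation context
        return x + pos.transpose(1, 2)             # add as position embedding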

Related Material


@InProceedings{Gu_2022_CVPR,
  author    = {Gu, Zhangxuan and Meng, Changhua and Wang, Ke and Lan, Jun and Wang, Weiqiang and Gu, Ming and Zhang, Liqing},
  title     = {XYLayoutLM: Towards Layout-Aware Multimodal Networks for Visually-Rich Document Understanding},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2022},
  pages     = {4583-4592}
}