Vision Grid Transformer for Document Layout Analysis

Cheng Da, Chuwei Luo, Qi Zheng, Cong Yao; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 19462-19472

Abstract


Document pre-trained models and grid-based models have proven to be very effective on various tasks in Document AI. However, for the document layout analysis (DLA) task, existing document pre-trained models, even those pre-trained in a multi-modal fashion, usually rely on either textual features or visual features. Grid-based models for DLA are multi-modal but largely neglect the effect of pre-training. To fully leverage multi-modal information and exploit pre-training techniques to learn better representations for DLA, in this paper we present VGT, a two-stream Vision Grid Transformer, in which a Grid Transformer (GiT) is proposed and pre-trained for 2D token-level and segment-level semantic understanding. Furthermore, a new dataset named D^4LA, which is to date the most diverse and detailed manually-annotated benchmark for document layout analysis, is curated and released. Experimental results illustrate that the proposed VGT model achieves new state-of-the-art results on DLA tasks, e.g., PubLayNet (95.7% to 96.2%), DocBank (79.6% to 84.1%), and D^4LA (67.7% to 68.8%). The code and models, as well as the D^4LA dataset, will be made publicly available.
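The two-stream design described above (a vision stream over the page image and a grid stream over OCR tokens rasterized into a 2D layout) can be pictured with a minimal conceptual sketch. The code below is an illustrative assumption, not the authors' released implementation: every module name, dimension, and the concatenation-based fusion are hypothetical placeholders chosen only to show how a vision feature map and a transformer-encoded token grid could be fused before a detection head.

# Conceptual two-stream sketch (illustrative assumptions only; not the
# authors' VGT code). PyTorch.
import torch
import torch.nn as nn

class TwoStreamLayoutModel(nn.Module):
    def __init__(self, vocab_size=30522, dim=256):
        super().__init__()
        # Vision stream: stand-in patchifying backbone (any CNN/ViT works).
        self.vision = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        # Grid stream: word embeddings placed on a 2D grid by OCR position,
        # refined by a transformer encoder (the "Grid Transformer" idea).
        self.tok_embed = nn.Embedding(vocab_size, dim, padding_idx=0)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.grid_encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Fusion: channel-wise concatenation plus 1x1 projection (one of
        # several plausible fusion schemes; assumed here for simplicity).
        self.fuse = nn.Conv2d(2 * dim, dim, kernel_size=1)

    def forward(self, image, token_grid):
        # image: (B, 3, H, W); token_grid: (B, h, w) long tensor of word ids,
        # where (h, w) is assumed to match the vision feature-map resolution.
        v = self.vision(image)                            # (B, dim, H/16, W/16)
        B, h, w = token_grid.shape
        g = self.tok_embed(token_grid).flatten(1, 2)      # (B, h*w, dim)
        g = self.grid_encoder(g)                          # contextualize 2D tokens
        g = g.reshape(B, h, w, -1).permute(0, 3, 1, 2)    # (B, dim, h, w)
        return self.fuse(torch.cat([v, g], dim=1))        # fused map for a detector head

A quick shape check under these assumptions: for a 224x224 image and a 14x14 token grid, model(torch.randn(2, 3, 224, 224), torch.randint(0, 30522, (2, 14, 14))) yields a (2, 256, 14, 14) fused feature map that a standard detection head (e.g., in a Mask R-CNN-style pipeline) could consume.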

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Da_2023_ICCV,
    author    = {Da, Cheng and Luo, Chuwei and Zheng, Qi and Yao, Cong},
    title     = {Vision Grid Transformer for Document Layout Analysis},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {19462-19472}
}