[bibtex]
@InProceedings{Bakkali_2025_WACV,
  author    = {Bakkali, Souhail and Biswas, Sanket and Ming, Zuheng and Coustaty, Micka\"el and Rusi\~nol, Mar\c{c}al and Terrades, Oriol Ramos and Llad\'os, Josep},
  title     = {GlobalDoc: A Cross-Modal Vision-Language Framework for Real-World Document Image Retrieval and Classification},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {1436-1446}
}
GlobalDoc: A Cross-Modal Vision-Language Framework for Real-World Document Image Retrieval and Classification
Abstract
Visual document understanding (VDU) has advanced rapidly with the development of powerful multi-modal language models. However, these models typically require extensive document pre-training data to learn intermediate representations and often suffer a significant performance drop in real-world online industrial settings. A primary issue is their heavy reliance on OCR engines to extract local positional information within document pages, which limits the models' ability to capture global information and hinders their generalizability, flexibility, and robustness. In this paper, we introduce GlobalDoc, a cross-modal transformer-based architecture pre-trained in a self-supervised manner using three novel pretext objective tasks. GlobalDoc improves the learning of richer semantic concepts by unifying language and visual representations, resulting in more transferable models. For proper evaluation, we also propose two novel document-level downstream VDU tasks, Few-Shot Document Image Classification (DIC) and Content-based Document Image Retrieval (DIR), designed to simulate industrial scenarios more closely. Extensive experimentation demonstrates GlobalDoc's effectiveness in practical settings.