HRVDA: High-Resolution Visual Document Assistant
Abstract
Leveraging vast training data, multimodal large language models (MLLMs) have demonstrated formidable general visual comprehension capabilities and achieved remarkable performance across various tasks. However, their performance in visual document understanding still leaves much room for improvement. This discrepancy is primarily attributed to the fact that visual document understanding is a fine-grained prediction task. In natural scenes, MLLMs typically use low-resolution images, leading to a substantial loss of visual information. Furthermore, general-purpose MLLMs do not excel in handling document-oriented instructions. In this paper, we propose a High-Resolution Visual Document Assistant (HRVDA), which bridges the gap between MLLMs and visual document understanding. This model employs a content filtering mechanism and an instruction filtering module to separately filter out content-agnostic visual tokens and instruction-agnostic visual tokens, thereby achieving efficient model training and inference for high-resolution images. In addition, we construct a document-oriented visual instruction tuning dataset and apply a multi-stage training strategy to enhance the model's document modeling capabilities. Extensive experiments demonstrate that our model achieves state-of-the-art performance across multiple document understanding datasets while maintaining training efficiency and inference speed comparable to low-resolution models.
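The abstract's two-stage token-pruning idea (drop content-agnostic tokens, then drop instruction-agnostic tokens) can be illustrated with a minimal sketch. Everything below is an assumption for illustration only, not HRVDA's actual implementation: the function names, the use of top-k over a stand-in content score, and cosine similarity as a proxy for instruction relevance are all hypothetical.

```python
import torch
import torch.nn.functional as F

def content_filter(tokens: torch.Tensor, scores: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Keep the top-scoring fraction of visual tokens.

    tokens: (B, N, D) visual tokens; scores: (B, N) per-token content
    scores (in the paper's setting these would come from a learned
    content detector; here they are just given as input).
    """
    k = max(1, int(tokens.size(1) * keep_ratio))
    idx = scores.topk(k, dim=1).indices                      # (B, k)
    idx = idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1))  # (B, k, D)
    return tokens.gather(1, idx)

def instruction_filter(tokens: torch.Tensor, instr: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Keep the tokens most relevant to the instruction.

    tokens: (B, N, D); instr: (B, D) pooled instruction embedding.
    Cosine similarity stands in for whatever relevance measure the
    actual instruction filtering module uses.
    """
    rel = F.cosine_similarity(tokens, instr.unsqueeze(1), dim=-1)  # (B, N)
    k = max(1, int(tokens.size(1) * keep_ratio))
    idx = rel.topk(k, dim=1).indices
    idx = idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
    return tokens.gather(1, idx)

if __name__ == "__main__":
    B, N, D = 2, 4096, 768               # many tokens from a high-resolution image
    vis = torch.randn(B, N, D)
    content_scores = torch.randn(B, N)   # stand-in for a learned content scorer
    instr = torch.randn(B, D)            # stand-in for an instruction encoder
    x = content_filter(vis, content_scores, keep_ratio=0.25)
    x = instruction_filter(x, instr, keep_ratio=0.5)
    print(x.shape)  # torch.Size([2, 512, 768]): far fewer tokens reach the LLM
```

The point of the sketch is the cost argument: the language model's attention scales with the number of visual tokens, so pruning 4096 tokens down to 512 before they enter the LLM is what lets a high-resolution input train and decode at roughly low-resolution speed.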
Related Material

[pdf] [supp] [arXiv]

[bibtex]
@InProceedings{Liu_2024_CVPR,
  author    = {Liu, Chaohu and Yin, Kun and Cao, Haoyu and Jiang, Xinghua and Li, Xin and Liu, Yinsong and Jiang, Deqiang and Sun, Xing and Xu, Linli},
  title     = {HRVDA: High-Resolution Visual Document Assistant},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {15534-15545}
}