Localize, Group, and Select: Boosting Text-VQA by Scene Text Modeling

Xiaopeng Lu, Zhen Fan, Yansen Wang, Jean Oh, Carolyn P. Rosé; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2021, pp. 2631-2639

Abstract


As an important task in multimodal context understanding, Text-VQA aims to answer questions by reading the text in images. It differs from the original VQA task in that Text-VQA requires substantial understanding of the relationships among scene texts, in addition to cross-modal grounding capability. In this paper, we propose LOGOS (Localize, Group, and Select), a novel model that tackles this problem from multiple angles. LOGOS leverages two grounding tasks to better localize the key information in the image, uses scene text clustering to group individual OCR tokens, and learns to select the best answer from different sources of OCR text. Experiments show that LOGOS outperforms previous state-of-the-art methods on two Text-VQA benchmarks without using additional OCR annotation data. Ablation studies and analysis demonstrate the capability of LOGOS to bridge different modalities and to better understand scene text.
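
Illustrative sketch (not from the paper): the "group" step clusters individual OCR tokens into multi-word units by spatial proximity. The abstract does not specify the clustering algorithm, so the Python snippet below uses DBSCAN over normalized OCR box centers, with invented tokens and positions, purely to make the grouping idea concrete.

    import numpy as np
    from sklearn.cluster import DBSCAN

    # Hypothetical OCR output: (token, x_center, y_center), with box
    # centers normalized to [0, 1]. Values are invented for illustration.
    ocr_tokens = [
        ("STOP", 0.20, 0.30),
        ("AHEAD", 0.22, 0.36),
        ("Main", 0.70, 0.80),
        ("St", 0.76, 0.80),
    ]

    # Cluster tokens whose box centers lie within eps of one another;
    # min_samples=1 lets an isolated token form its own singleton group.
    centers = np.array([[x, y] for _, x, y in ocr_tokens])
    labels = DBSCAN(eps=0.1, min_samples=1).fit_predict(centers)

    # Collect tokens by cluster label to form multi-word candidates.
    groups = {}
    for (token, _, _), label in zip(ocr_tokens, labels):
        groups.setdefault(label, []).append(token)

    for label, tokens in sorted(groups.items()):
        print(f"group {label}: {' '.join(tokens)}")
    # group 0: STOP AHEAD
    # group 1: Main St

Grouping nearby tokens this way yields phrase-level answer candidates ("Main St") rather than isolated words, which is the motivation the abstract gives for the grouping step.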

Related Material


[bibtex]
@InProceedings{Lu_2021_ICCV,
  author    = {Lu, Xiaopeng and Fan, Zhen and Wang, Yansen and Oh, Jean and Ros\'e, Carolyn P.},
  title     = {Localize, Group, and Select: Boosting Text-VQA by Scene Text Modeling},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {October},
  year      = {2021},
  pages     = {2631-2639}
}