From Strings to Things: Knowledge-Enabled VQA Model That Can Read and Reason

Ajeet Kumar Singh, Anand Mishra, Shashank Shekhar, Anirban Chakraborty; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 4602-4612

Abstract


Text present in images is not merely a collection of strings; it provides useful cues about the image. Despite its utility for better image understanding, scene text is not used in traditional visual question answering (VQA) models. In this work, we present a VQA model that can read scene text and perform reasoning on a knowledge graph to arrive at an accurate answer. Our proposed model has three mutually interacting modules: (i) a proposal module to obtain word and visual content proposals from the image, (ii) a fusion module to fuse these proposals, the question, and the knowledge base to mine relevant facts, and to represent these facts as a multi-relational graph, and (iii) a reasoning module to perform novel gated graph neural network based reasoning on this graph. The performance of our knowledge-enabled VQA model is evaluated on our newly introduced dataset, text-KVQA. To the best of our knowledge, this is the first dataset that identifies the need for bridging text recognition with knowledge graph based reasoning. Through extensive experiments, we show that our proposed method outperforms both traditional VQA methods and question-answering-over-knowledge-base methods on text-KVQA.
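The reasoning module above propagates information over the multi-relational fact graph with a gated graph neural network (GGNN), in which node states are updated GRU-style from relation-specific neighbor messages. As a rough illustration only (this is not the authors' implementation; the dimensions, weight names, and random inputs below are placeholders), one propagation step might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggnn_step(h, adj, W_msg, params):
    """One GGNN propagation step (GRU-style node update).

    h:      (n, d) node states
    adj:    list of (n, n) adjacency matrices, one per relation type
    W_msg:  list of (d, d) message transforms, one per relation type
    params: GRU gate weights Wz, Uz, Wr, Ur, Wh, Uh, each (d, d)
    """
    # Aggregate relation-specific messages from neighboring nodes
    m = sum(A @ h @ W for A, W in zip(adj, W_msg))
    z = sigmoid(m @ params["Wz"] + h @ params["Uz"])          # update gate
    r = sigmoid(m @ params["Wr"] + h @ params["Ur"])          # reset gate
    h_tilde = np.tanh(m @ params["Wh"] + (r * h) @ params["Uh"])
    return (1 - z) * h + z * h_tilde                          # gated state update

# Toy multi-relational graph: 5 nodes, 8-dim states, 2 relation types
n, d, n_rel = 5, 8, 2
h = rng.normal(size=(n, d))
adj = [rng.integers(0, 2, size=(n, n)).astype(float) for _ in range(n_rel)]
W_msg = [rng.normal(scale=0.1, size=(d, d)) for _ in range(n_rel)]
params = {k: rng.normal(scale=0.1, size=(d, d))
          for k in ["Wz", "Uz", "Wr", "Ur", "Wh", "Uh"]}

for _ in range(3):  # a few rounds of propagation
    h = ggnn_step(h, adj, W_msg, params)
```

After propagation, the refined node states would be pooled or scored to pick the answer entity; those details are specific to the paper and omitted here.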

Related Material


[bibtex]
@InProceedings{Singh_2019_ICCV,
author = {Singh, Ajeet Kumar and Mishra, Anand and Shekhar, Shashank and Chakraborty, Anirban},
title = {From Strings to Things: Knowledge-Enabled VQA Model That Can Read and Reason},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}