Beyond VQA: Generating Multi-Word Answers and Rationales to Visual Questions

Radhika Dua, Sai Srinivas Kancheti, Vineeth N Balasubramanian; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 1623-1632

Abstract
Visual Question Answering (VQA) is a multi-modal task that aims to measure high-level visual understanding. Contemporary VQA models are restrictive: answers are obtained by classification over a limited vocabulary (in open-ended VQA) or over a fixed set of multiple-choice answers. In this work, we present a completely generative formulation in which a multi-word answer is generated for a visual query. To take this a step further, we introduce a new task, ViQAR (Visual Question Answering and Reasoning), in which a model must generate a complete answer together with a rationale that justifies it. We propose an end-to-end architecture to solve this task and describe how to evaluate it. Through qualitative and quantitative evaluation, as well as a human Turing Test, we show that our model generates strong answers and rationales.

Related Material
@InProceedings{Dua_2021_CVPR,
    author    = {Dua, Radhika and Kancheti, Sai Srinivas and Balasubramanian, Vineeth N},
    title     = {Beyond VQA: Generating Multi-Word Answers and Rationales to Visual Questions},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {1623-1632}
}