VQA Therapy: Exploring Answer Differences by Visually Grounding Answers

Chongyan Chen, Samreen Anjum, Danna Gurari; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 15315-15325

Abstract


Visual question answering is the task of predicting the answer to a question about an image. Given that different people can provide different answers to a visual question, we aim to better understand why by visually grounding each answer. We introduce the first dataset that visually grounds each unique answer to each visual question, which we call VQAAnswerTherapy. We then propose two novel problems: predicting whether a visual question has a single answer grounding, and localizing all answer groundings. We benchmark modern algorithms on these novel problems to show where they succeed and struggle. The dataset and evaluation server are publicly available at https://vizwiz.org/tasks-and-datasets/vqa-answer-therapy/.
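
To make the two proposed problems concrete, below is a minimal Python sketch of how one example from a VQAAnswerTherapy-style dataset and the first task's target label could be represented. The record fields, the box representation of a grounding, and the IoU-threshold criterion for "single answer grounding" are illustrative assumptions, not the paper's actual schema or evaluation protocol.

```python
from dataclasses import dataclass
from itertools import combinations

# Axis-aligned box as (x_min, y_min, x_max, y_max). The dataset's actual
# groundings are image regions; a box is a simplifying assumption here.
Box = tuple[float, float, float, float]

@dataclass
class VQAExample:
    # Hypothetical record layout; field names are illustrative, not the
    # dataset's real schema.
    image_id: str
    question: str
    answer_groundings: dict[str, Box]  # unique answer -> its grounding

def box_iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih

    def area(t: Box) -> float:
        return (t[2] - t[0]) * (t[3] - t[1])

    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def has_single_grounding(ex: VQAExample, iou_thresh: float = 0.5) -> bool:
    """Target label for the first task: True when every pair of unique
    answers' groundings overlaps enough to count as one shared region.
    The IoU-threshold criterion here is an assumption for illustration."""
    boxes = list(ex.answer_groundings.values())
    return all(box_iou(a, b) >= iou_thresh for a, b in combinations(boxes, 2))

# Example: two different answer strings that point to the same region,
# so the visual question has a single answer grounding.
ex = VQAExample(
    image_id="img_001",
    question="What color is the couch?",
    answer_groundings={"gray": (40, 60, 200, 180), "grey": (42, 58, 198, 182)},
)
print(has_single_grounding(ex))  # True
```

Under this framing, the second problem (localizing all answer groundings) amounts to predicting the full set of regions in answer_groundings for a new image-question pair, rather than just the binary single-vs-multiple label.
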

Related Material


BibTeX
@InProceedings{Chen_2023_ICCV,
    author    = {Chen, Chongyan and Anjum, Samreen and Gurari, Danna},
    title     = {VQA Therapy: Exploring Answer Differences by Visually Grounding Answers},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {15315-15325}
}