Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded

Ramprasaath R. Selvaraju, Stefan Lee, Yilin Shen, Hongxia Jin, Shalini Ghosh, Larry Heck, Dhruv Batra, Devi Parikh; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 2591-2600

Abstract

Many vision-and-language models suffer from poor visual grounding -- they often fall back on easy-to-learn language priors rather than basing their decisions on visual concepts in the image. In this work, we propose a generic approach called Human Importance-aware Network Tuning (HINT) that effectively leverages human demonstrations to improve visual grounding. HINT encourages deep networks to be sensitive to the same input regions as humans. Our approach optimizes the alignment between human attention maps and gradient-based network importances -- ensuring that models learn not just to look at, but to rely on, the visual concepts that humans found relevant for a task when making predictions. We apply HINT to the Visual Question Answering and Image Captioning tasks, outperforming top approaches on splits that penalize over-reliance on language priors (VQA-CP and robust captioning), using human attention demonstrations for just 6% of the training data.
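As a rough illustration of the idea, below is a minimal PyTorch sketch of a HINT-style alignment term, not the paper's exact formulation. It assumes a model that produces a scalar score for the ground-truth answer from a set of region features; each region's network importance is approximated as gradient-times-input (Grad-CAM style), and alignment with human attention is approximated with a pairwise ranking hinge. All names here (hint_loss, region_feats, human_importance) are hypothetical.

import torch
import torch.nn.functional as F

def hint_loss(answer_score, region_feats, human_importance, margin=0.0):
    # answer_score: scalar model score for the ground-truth answer
    #               (must lie on a graph that includes region_feats).
    # region_feats: (R, D) region features used to compute the score.
    # human_importance: (R,) human attention mass per region.

    # Gradient-based importance of each region for the answer score,
    # approximated here as gradient-times-input summed over features.
    grads = torch.autograd.grad(answer_score, region_feats,
                                create_graph=True)[0]
    net_importance = (grads * region_feats).sum(dim=-1)  # (R,)

    # Pairwise ranking hinge: whenever humans deem region i more
    # important than region j, penalize the network if it does not
    # also rank region i above region j.
    diff_human = human_importance.unsqueeze(1) - human_importance.unsqueeze(0)
    diff_net = net_importance.unsqueeze(1) - net_importance.unsqueeze(0)
    pairs = (diff_human > 0).float()
    loss = (F.relu(margin - diff_net) * pairs).sum() / pairs.sum().clamp(min=1)
    return loss

In training, such a term would be added to the usual task loss (e.g. total_loss = task_loss + lam * hint_loss(...)), with human attention supervision available for only a subset of the training examples.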

Related Material

[bibtex]
@InProceedings{Selvaraju_2019_ICCV,
author = {Selvaraju, Ramprasaath R. and Lee, Stefan and Shen, Yilin and Jin, Hongxia and Ghosh, Shalini and Heck, Larry and Batra, Dhruv and Parikh, Devi},
title = {Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}