Stacked Latent Attention for Multimodal Reasoning

Haoqi Fan, Jiatong Zhou; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 1072-1080

Abstract

Attention has proven to be a pivotal development in deep learning and has been used for a multitude of multimodal learning tasks such as visual question answering and image captioning. In this work, we pinpoint potential limitations in the design of a traditional attention model. We identify that 1) current attention mechanisms discard the latent information from intermediate reasoning, losing the positional information already captured by the attention heatmaps, and 2) stacked attention, a common way to improve spatial reasoning, may perform suboptimally because of the vanishing gradient problem. We introduce a novel attention architecture to address these problems, in which all spatial configuration information contained in the intermediate reasoning process is retained in a pathway of convolutional layers. We show that this new attention leads to substantial improvements on multiple multimodal reasoning tasks, including single-model performance on the VQA dataset comparable to the state of the art without using external knowledge, as well as clear gains on the image captioning task.
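To make the abstract's idea concrete, here is a minimal NumPy sketch of a two-hop stacked attention pass that, instead of discarding the intermediate latent states, retains them in a parallel pathway and projects them with a 1x1 convolution (a per-region linear map). All names, dimensions, and weight initializations here are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d, m = 49, 8, 16  # spatial regions, feature dim, latent dim (illustrative sizes)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_hop(V, q, Wv, Wq, w):
    """One attention hop; returns the context vector AND the latent map h."""
    h = np.tanh(V @ Wv + q @ Wq)   # (K, m) latent pre-softmax features
    alpha = softmax(h @ w)         # (K,) attention weights over regions
    context = alpha @ V            # (d,) attended visual summary
    return context, h, alpha

V = rng.standard_normal((K, d))    # image region features
q = rng.standard_normal(d)         # question embedding

# Hop 1
Wv1, Wq1, w1 = rng.standard_normal((d, m)), rng.standard_normal((d, m)), rng.standard_normal(m)
c1, h1, a1 = attention_hop(V, q, Wv1, Wq1, w1)

# Hop 2: refine the query with the first context vector (standard stacked attention)
q2 = q + c1
Wv2, Wq2, w2 = rng.standard_normal((d, m)), rng.standard_normal((d, m)), rng.standard_normal(m)
c2, h2, a2 = attention_hop(V, q2, Wv2, Wq2, w2)

# Latent pathway: instead of discarding h1 and h2, concatenate them per region
# and project with a 1x1 convolution, i.e. a shared linear map applied at each region.
H = np.concatenate([h1, h2], axis=1)     # (K, 2m) retained latent states
W_conv = rng.standard_normal((2 * m, d))
latent_out = np.maximum(H @ W_conv, 0.0) # (K, d) ReLU'd 1x1-conv output

# Final representation pools the latent pathway with the last context vector.
out = c2 + latent_out.mean(axis=0)       # (d,)
```

In a plain stacked-attention model only `c2` would survive to the answer classifier; the hypothetical `latent_out` pathway is what keeps the spatial configuration information from both hops in play.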

Related Material

[pdf]
[bibtex]
@InProceedings{Fan_2018_CVPR,
author = {Fan, Haoqi and Zhou, Jiatong},
title = {Stacked Latent Attention for Multimodal Reasoning},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
pages = {1072-1080},
month = {June},
year = {2018}
}