Multimodal Dual Attention Memory for Video Story Question Answering

Kyung-Min Kim, Seong-Ho Choi, Jin-Hwa Kim, Byoung-Tak Zhang; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 673-688

Abstract


We propose a video story question-answering (QA) architecture, Multimodal Dual Attention Memory (MDAM). The key idea is to use a dual attention mechanism with late fusion. MDAM uses self-attention to learn the latent concepts in scene frames and captions. Given a question, MDAM applies a second attention over these latent concepts. Multimodal fusion is performed only after the dual attention processes (late fusion). Through this processing pipeline, MDAM learns to infer a high-level vision-language joint representation from an abstraction of the full video content. We evaluate MDAM on the PororoQA and MovieQA datasets, which provide large-scale QA annotations for cartoon videos and movies, respectively. On both datasets, MDAM achieves new state-of-the-art results, outperforming the runner-up models by significant margins. Ablation studies confirm that the dual attention mechanism combined with late fusion yields the best performance. We also present a qualitative analysis by visualizing the inference mechanisms of MDAM.
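
To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of the dual-attention-with-late-fusion idea: self-attention over frame and caption features, question-guided attention over the resulting latent concepts, and multimodal fusion applied only afterward. The module names, feature dimensions, attention layers, and the fusion head are illustrative assumptions for this sketch, not the authors' implementation.

# Illustrative sketch of dual attention with late fusion (not the paper's code).
import torch
import torch.nn as nn


class DualAttentionLateFusion(nn.Module):
    def __init__(self, dim=512, n_heads=8, n_answers=5):
        super().__init__()
        # Stage 1: self-attention learns latent concepts within each modality.
        self.frame_self_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.caption_self_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        # Stage 2: question-guided attention over the latent concepts.
        self.q_frame_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.q_caption_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        # Stage 3: multimodal fusion applied only after both attention stages (late fusion).
        # A simple concatenation + MLP stands in for the paper's fusion module.
        self.fusion = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, n_answers))

    def forward(self, frames, captions, question):
        # frames:   (B, T, dim) pre-extracted frame features
        # captions: (B, T, dim) pre-extracted caption features
        # question: (B, dim)    encoded question
        f_latent, _ = self.frame_self_attn(frames, frames, frames)          # visual latent concepts
        c_latent, _ = self.caption_self_attn(captions, captions, captions)  # textual latent concepts
        q = question.unsqueeze(1)                                           # (B, 1, dim) query
        f_attended, _ = self.q_frame_attn(q, f_latent, f_latent)            # question attends to vision
        c_attended, _ = self.q_caption_attn(q, c_latent, c_latent)          # question attends to language
        joint = torch.cat([f_attended.squeeze(1), c_attended.squeeze(1)], dim=-1)
        return self.fusion(joint)                                           # answer scores


if __name__ == "__main__":
    model = DualAttentionLateFusion()
    frames = torch.randn(2, 40, 512)    # toy batch: 2 stories, 40 frames each
    captions = torch.randn(2, 40, 512)  # one caption feature per frame
    question = torch.randn(2, 512)
    print(model(frames, captions, question).shape)  # torch.Size([2, 5])

The key design point the sketch preserves is that the two modalities interact only in the final fusion step; all attention before that point operates within a single modality or between the question and one modality.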

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Kim_2018_ECCV,
author = {Kim, Kyung-Min and Choi, Seong-Ho and Kim, Jin-Hwa and Zhang, Byoung-Tak},
title = {Multimodal Dual Attention Memory for Video Story Question Answering},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}