MUTAN: Multimodal Tucker Fusion for Visual Question Answering

Hedi Ben-younes, Remi Cadene, Matthieu Cord, Nicolas Thome; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2612-2620

Abstract


Bilinear models provide an appealing framework for mixing and merging information in Visual Question Answering (VQA) tasks. They help to learn high-level associations between question meaning and visual concepts in the image, but they suffer from huge dimensionality issues. We introduce MUTAN, a multimodal tensor-based Tucker decomposition to efficiently parametrize bilinear interactions between visual and textual representations. In addition to the Tucker framework, we design a low-rank matrix-based decomposition to explicitly constrain the interaction rank. With MUTAN, we control the complexity of the merging scheme while keeping interpretable fusion relations. We show how the Tucker decomposition framework generalizes some of the latest VQA architectures, providing state-of-the-art results.
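The fusion scheme described above can be sketched concretely: project each modality with a mode-specific factor matrix, then constrain the bilinear interaction to rank R by summing R element-wise products. The following is a minimal illustration in PyTorch, not the authors' released code; the class name, dimension sizes, tanh non-linearities, and the use of per-rank Linear layers are assumptions made for clarity.

import torch
import torch.nn as nn

class MutanFusion(nn.Module):
    """Sketch of a Tucker-style bilinear fusion with a rank-R constraint
    on the core tensor, in the spirit of MUTAN. All hyperparameters below
    are illustrative assumptions."""

    def __init__(self, dim_q=2400, dim_v=2048, dim_hq=310, dim_hv=310,
                 dim_out=510, rank=10):
        super().__init__()
        self.rank = rank
        # Mode-1 and mode-2 factor matrices of the Tucker decomposition:
        # they project the question and image features into smaller spaces.
        self.q_proj = nn.Linear(dim_q, dim_hq)
        self.v_proj = nn.Linear(dim_v, dim_hv)
        # Low-rank core: the interaction is restricted to a sum of R terms,
        # each parametrized by one projection per modality.
        self.q_rank = nn.ModuleList([nn.Linear(dim_hq, dim_out) for _ in range(rank)])
        self.v_rank = nn.ModuleList([nn.Linear(dim_hv, dim_out) for _ in range(rank)])

    def forward(self, q, v):
        # q: (batch, dim_q) pooled question embedding
        # v: (batch, dim_v) pooled image feature
        q_h = torch.tanh(self.q_proj(q))
        v_h = torch.tanh(self.v_proj(v))
        # Rank-constrained bilinear interaction: sum of R element-wise products.
        z = 0
        for r in range(self.rank):
            z = z + self.q_rank[r](q_h) * self.v_rank[r](v_h)
        return torch.tanh(z)  # fused representation fed to an answer classifier

# Example usage (hypothetical shapes): fuse a batch of 16 question/image pairs.
# fusion = MutanFusion()
# out = fusion(torch.randn(16, 2400), torch.randn(16, 2048))  # (16, 510)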

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Ben-younes_2017_ICCV,
author = {Ben-younes, Hedi and Cadene, Remi and Cord, Matthieu and Thome, Nicolas},
title = {MUTAN: Multimodal Tucker Fusion for Visual Question Answering},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}