Compact Trilinear Interaction for Visual Question Answering

Tuong Do, Thanh-Toan Do, Huy Tran, Erman Tjiputra, Quang D. Tran; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 392-401

Abstract


In Visual Question Answering (VQA), answers are strongly correlated with the question meaning and the visual content. Thus, to selectively utilize image, question, and answer information, we propose a novel trilinear interaction model which simultaneously learns high-level associations between these three inputs. In addition, to overcome the interaction complexity, we introduce a multimodal tensor-based PARALIND decomposition which efficiently parameterizes the trilinear interaction between the three inputs. Moreover, knowledge distillation is applied for the first time to Free-form Open-ended VQA. It not only reduces the computational cost and required memory but also transfers knowledge from the trilinear interaction model to a bilinear interaction model. Extensive experiments on the benchmark datasets TDIUC, VQA-2.0, and Visual7W show that the proposed compact trilinear interaction model achieves state-of-the-art results with a single model on all three datasets.
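
The core idea of the trilinear interaction, a joint representation of image, question, and answer computed through a decomposed interaction tensor, can be illustrated with a short sketch in PyTorch. This is not the authors' implementation: the paper uses a PARALIND decomposition, whereas the sketch below uses a simpler CP-style (rank-R) factorization to keep the example compact, and all feature dimensions and the rank are illustrative assumptions.

    # Minimal sketch (not the authors' code) of a rank-constrained trilinear
    # interaction between image (v), question (q), and answer (a) features.
    # The paper uses PARALIND; this uses a CP-style low-rank factorization.
    import torch
    import torch.nn as nn

    class LowRankTrilinear(nn.Module):
        def __init__(self, dim_v=2048, dim_q=1024, dim_a=1024, rank=320, dim_out=1024):
            super().__init__()
            # One factor matrix per modality, projecting each input to the rank space.
            self.U_v = nn.Linear(dim_v, rank)
            self.U_q = nn.Linear(dim_q, rank)
            self.U_a = nn.Linear(dim_a, rank)
            # Output projection collapses the rank dimension.
            self.P = nn.Linear(rank, dim_out)

        def forward(self, v, q, a):
            # The element-wise product in the shared rank space approximates the
            # full (dim_v x dim_q x dim_a x dim_out) interaction tensor.
            joint = self.U_v(v) * self.U_q(q) * self.U_a(a)
            return self.P(joint)

    # Usage with a batch of 8 examples (feature dims are assumptions):
    v = torch.randn(8, 2048)            # image features
    q = torch.randn(8, 1024)            # question features
    a = torch.randn(8, 1024)            # answer features
    z = LowRankTrilinear()(v, q, a)     # joint representation, shape (8, 1024)

The rank controls the trade-off between expressiveness and parameter count, which is the motivation for a compact decomposition in the first place.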
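
The distillation step can likewise be sketched. Since answers are unavailable at test time, a trilinear teacher (image, question, answer) supervises a bilinear student (image, question). The sketch below uses the standard soft-target distillation loss; the paper's exact loss formulation, temperature, and weighting may differ, so treat these values as assumptions.

    # Minimal sketch of distilling a trilinear teacher into a bilinear student.
    # Temperature T and weight alpha are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
        # Soft-target term: match the student to the teacher's softened distribution.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        # Hard-target term: standard cross-entropy against ground-truth answers.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard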

Related Material


[bibtex]
@InProceedings{Do_2019_ICCV,
author = {Do, Tuong and Do, Thanh-Toan and Tran, Huy and Tjiputra, Erman and Tran, Quang D.},
title = {Compact Trilinear Interaction for Visual Question Answering},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}