FlipDial: A Generative Model for Two-Way Visual Dialogue

Daniela Massiceti, N. Siddharth, Puneet K. Dokania, Philip H.S. Torr; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 6097-6105

Abstract


We present FlipDial, a generative model for Visual Dialogue that simultaneously plays the role of both participants in a visually-grounded dialogue. Given context in the form of an image and an associated caption summarising its contents, FlipDial learns both to answer and to pose questions, and can generate entire sequences of dialogue (question-answer pairs) that are diverse and relevant to the image. To do this, FlipDial relies on a simple but surprisingly powerful idea: it uses convolutional neural networks (CNNs) to encode entire dialogues directly, implicitly capturing dialogue context, and conditional variational autoencoders (VAEs) to learn the generative model. FlipDial outperforms the state-of-the-art model on the sequential answering task (1VD) on the VisDial dataset, improving Mean Rank by 5 points with its generated answers. We are the first to extend this paradigm to full two-way visual dialogue (2VD), in which our model generates both questions and answers in sequence from a visual input, and for which we propose a set of novel evaluation measures and metrics.
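To make the core idea concrete, the sketch below shows a minimal conditional VAE in PyTorch in which a 1-D CNN encodes an entire dialogue (a flat token sequence) into a latent distribution, conditioned on image/caption features. All layer sizes, names (e.g. ConditionalVAE, cvae_loss), and the use of a 1-D convolutional encoder with an MLP decoder are illustrative assumptions for exposition; they are not the architecture from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalVAE(nn.Module):
    """Minimal conditional VAE sketch (hypothetical, not the paper's model):
    a 1-D CNN encodes a whole dialogue token sequence, implicitly capturing
    dialogue context; the decoder reconstructs the sequence conditioned on
    image/caption features."""

    def __init__(self, vocab_size=1000, embed_dim=64, cond_dim=128,
                 latent_dim=32, seq_len=40):
        super().__init__()
        self.seq_len, self.vocab_size = seq_len, vocab_size
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # CNN encoder over the entire dialogue sequence
        self.encoder = nn.Sequential(
            nn.Conv1d(embed_dim, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.to_mu = nn.Linear(128 + cond_dim, latent_dim)
        self.to_logvar = nn.Linear(128 + cond_dim, latent_dim)
        # Decoder maps (latent, condition) to per-token vocabulary logits
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, seq_len * vocab_size),
        )

    def forward(self, tokens, cond):
        # tokens: (B, seq_len) token ids; cond: (B, cond_dim) visual features
        h = self.encoder(self.embed(tokens).transpose(1, 2)).squeeze(-1)
        h = torch.cat([h, cond], dim=-1)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation
        logits = self.decoder(torch.cat([z, cond], dim=-1))
        return logits.view(-1, self.seq_len, self.vocab_size), mu, logvar

def cvae_loss(logits, tokens, mu, logvar):
    # Standard ELBO: token reconstruction term plus KL divergence to N(0, I)
    recon = F.cross_entropy(logits.reshape(-1, logits.size(-1)), tokens.reshape(-1))
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

At generation time, one would sample z from the prior and decode it under the visual condition to produce a new dialogue, which is what lets a conditional VAE yield diverse outputs for the same image.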

Related Material


@InProceedings{Massiceti_2018_CVPR,
author = {Massiceti, Daniela and Siddharth, N. and Dokania, Puneet K. and Torr, Philip H.S.},
title = {FlipDial: A Generative Model for Two-Way Visual Dialogue},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}