Visual Reasoning with Multi-hop Feature Modulation
Florian Strub, Mathieu Seurin, Ethan Perez, Harm de Vries, Jeremie Mary, Philippe Preux, Aaron Courville, Olivier Pietquin; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 784-800
Abstract
Recent breakthroughs in computer vision and natural language processing have spurred interest in challenging multi-modal tasks such as visual question-answering and visual dialogue. For such tasks, one successful approach is to condition image-based convolutional network computation on language via Feature-wise Linear Modulation (FiLM) layers, i.e., per-channel scaling and shifting. By alternating between attending to the language input and generating FiLM layer parameters, this approach is better able to scale to settings with longer input sequences such as dialogue. We demonstrate that multi-hop FiLM generation significantly outperforms prior state-of-the-art on the GuessWhat?! visual dialogue task and matches state-of-the-art on the ReferIt object retrieval task, and we provide additional qualitative analysis.
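To make the mechanism concrete, below is a minimal PyTorch sketch of multi-hop FiLM generation, not the authors' exact architecture: the module names, the GRU-cell context update, the additive attention scorer, and the mean-pooled initial context are illustrative assumptions. Each hop attends over the language token embeddings, updates a context vector, and emits per-channel (gamma, beta) parameters for one convolutional block.

import torch
import torch.nn as nn

class FiLM(nn.Module):
    # Per-channel scaling (gamma) and shifting (beta) of a conv feature map.
    def forward(self, features, gamma, beta):
        # features: (B, C, H, W); gamma, beta: (B, C)
        return gamma[:, :, None, None] * features + beta[:, :, None, None]

class MultiHopFiLMGenerator(nn.Module):
    # Hypothetical generator: at each hop, attend over the language tokens
    # using the current context, update the context, then emit FiLM
    # parameters for one conv block.
    def __init__(self, lang_dim, num_blocks, num_channels):
        super().__init__()
        self.score = nn.Linear(2 * lang_dim, 1)       # additive attention scorer (assumed)
        self.update = nn.GRUCell(lang_dim, lang_dim)  # context update rule (assumed)
        self.to_gamma_beta = nn.ModuleList(
            nn.Linear(lang_dim, 2 * num_channels) for _ in range(num_blocks)
        )

    def forward(self, tokens):
        # tokens: (B, T, lang_dim) contextual word embeddings
        B, T, D = tokens.shape
        context = tokens.mean(dim=1)  # initial context (assumed)
        film_params = []
        for head in self.to_gamma_beta:
            expanded = context[:, None, :].expand(B, T, D)
            scores = self.score(torch.cat([tokens, expanded], dim=-1)).squeeze(-1)
            weights = scores.softmax(dim=-1)                      # attention over tokens
            attended = (weights[:, :, None] * tokens).sum(dim=1)  # (B, D)
            context = self.update(attended, context)
            gamma, beta = head(context).chunk(2, dim=-1)          # (B, C) each
            film_params.append((gamma, beta))
        return film_params

Each (gamma, beta) pair would then modulate the output of its corresponding convolutional block through a FiLM layer as above, so later hops can condition on different parts of the language input than earlier ones.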
Related Material
[pdf]
[arXiv]
[bibtex]
@InProceedings{Strub_2018_ECCV,
author = {Strub, Florian and Seurin, Mathieu and Perez, Ethan and de Vries, Harm and Mary, Jeremie and Preux, Philippe and Courville, Aaron and Pietquin, Olivier},
title = {Visual Reasoning with Multi-hop Feature Modulation},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}