SegEQA: Video Segmentation Based Visual Attention for Embodied Question Answering
Haonan Luo, Guosheng Lin, Zichuan Liu, Fayao Liu, Zhenmin Tang, Yazhou Yao; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 9667-9676
Abstract
Embodied Question Answering (EQA) is a newly defined research area in which an agent is required to answer a user's questions by exploring a real-world environment. It has attracted increasing research interest due to its broad applications in autonomous driving systems, in-home robots, and personal assistants. Most existing methods perform poorly in terms of answering and navigation accuracy due to the absence of local details and vulnerability to the ambiguity caused by complicated vision conditions. To tackle these problems, we propose a segmentation-based visual attention mechanism for Embodied Question Answering. First, we extract local semantic features by introducing a novel high-speed video segmentation framework. Then, guided by the extracted semantic features, a bottom-up visual attention mechanism is proposed for the Visual Question Answering (VQA) sub-task. Further, a feature fusion strategy is proposed to guide the training of the navigator without much additional computational cost. Ablation experiments show that our method boosts the performance of the VQA module by 4.2% (68.99% vs. 64.73%) and leads to a 3.6% (48.59% vs. 44.98%) overall improvement in EQA accuracy.
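To make the attention mechanism concrete, below is a minimal PyTorch sketch of one plausible form of segmentation-guided bottom-up attention: frame features are mask-average pooled per semantic class predicted by the segmentation framework, then a question embedding attends over the resulting region features. The module name (SegGuidedAttention), dimensions, and structure are illustrative assumptions, not the authors' released code.

# Hypothetical sketch of segmentation-guided bottom-up attention for the VQA
# sub-task; shapes and names are assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegGuidedAttention(nn.Module):
    def __init__(self, feat_dim=512, q_dim=512, hid_dim=256):
        super().__init__()
        self.proj_v = nn.Linear(feat_dim, hid_dim)  # projects region features
        self.proj_q = nn.Linear(q_dim, hid_dim)     # projects the question embedding
        self.score = nn.Linear(hid_dim, 1)          # scalar attention logit per region

    def forward(self, feat_map, seg_mask, q_emb):
        # feat_map: (B, C, H, W) frame features; seg_mask: (B, H, W) integer
        # class labels from the segmentation framework; q_emb: (B, q_dim).
        B, C, H, W = feat_map.shape
        K = int(seg_mask.max().item()) + 1  # number of semantic classes present
        one_hot = F.one_hot(seg_mask, K).permute(0, 3, 1, 2).float()  # (B, K, H, W)
        area = one_hot.sum(dim=(2, 3)).clamp(min=1.0)                 # (B, K)
        # Mask-average pooling: one feature vector per semantic region.
        region = torch.einsum('bchw,bkhw->bkc', feat_map, one_hot) / area.unsqueeze(-1)
        # Question-conditioned soft attention over the region features.
        joint = torch.tanh(self.proj_v(region) + self.proj_q(q_emb).unsqueeze(1))
        alpha = torch.softmax(self.score(joint).squeeze(-1), dim=1)  # (B, K)
        attended = (alpha.unsqueeze(-1) * region).sum(dim=1)         # (B, C)
        return attended, alpha

The attended vector would then feed the answer classifier, while alpha exposes which semantic regions the model relied on; the fused features could likewise be shared with the navigator, per the feature fusion strategy described above.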
Related Material
[pdf]
[bibtex]
@InProceedings{Luo_2019_ICCV,
author = {Luo, Haonan and Lin, Guosheng and Liu, Zichuan and Liu, Fayao and Tang, Zhenmin and Yao, Yazhou},
title = {SegEQA: Video Segmentation Based Visual Attention for Embodied Question Answering},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}