Natural Language Guided Visual Relationship Detection

Wentong Liao, Bodo Rosenhahn, Ling Shuai, Michael Ying Yang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019

Abstract


Reasoning about the relationships between object pairs in images is a crucial task for holistic scene understanding. Most existing works treat this task as a pure visual classification problem: each relationship or phrase is classified into a relation category based on extracted visual features. However, each kind of relationship involves a wide variety of object combinations, and each pair of objects can have diverse interactions. Obtaining sufficient training samples for all possible relationship categories is therefore difficult and expensive. In this work, we propose a natural language guided framework to tackle this problem. We use a generic bi-directional recurrent neural network to predict the semantic connection between the participating objects in a relationship from the perspective of natural language. The proposed simple method achieves state-of-the-art results on the Visual Relationship Detection (VRD) and Visual Genome datasets, especially when predicting unseen relationships (e.g., recall improved from 76.42% to 89.79% on the VRD zero-shot test set).
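To make the abstract's description concrete, below is a minimal PyTorch sketch of the language component it describes: a bi-directional recurrent network (here a GRU) reads the word embeddings of the subject and object names and scores candidate predicates. All names and sizes (LanguagePredicatePredictor, embed_dim, hidden_dim, vocab_size) are illustrative assumptions, not the authors' released implementation; only the default of 70 predicate categories follows the VRD dataset.

import torch
import torch.nn as nn

class LanguagePredicatePredictor(nn.Module):
    """Illustrative sketch: a bi-directional GRU reads the word
    embeddings of a (subject, object) pair and scores predicates."""

    def __init__(self, vocab_size, embed_dim=300, hidden_dim=256,
                 num_predicates=70):  # VRD defines 70 predicate categories
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bigru = nn.GRU(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        # Concatenated final forward/backward states -> predicate logits.
        self.classifier = nn.Linear(2 * hidden_dim, num_predicates)

    def forward(self, token_ids):
        # token_ids: (batch, 2) word indices for [subject, object].
        x = self.embed(token_ids)            # (batch, 2, embed_dim)
        _, h = self.bigru(x)                 # h: (2, batch, hidden_dim)
        h = torch.cat([h[0], h[1]], dim=-1)  # (batch, 2 * hidden_dim)
        return self.classifier(h)            # (batch, num_predicates)

# Hypothetical usage: score predicates for a "person"-"horse" pair.
model = LanguagePredicatePredictor(vocab_size=5000)
logits = model(torch.tensor([[12, 87]]))  # made-up vocabulary indices

Reading the pair in both directions lets the model condition the predicate on subject-object order, which matters for asymmetric relations such as "person ride horse" versus "horse carry person".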

Related Material


[bibtex]
@InProceedings{Liao_2019_CVPR_Workshops,
  author    = {Liao, Wentong and Rosenhahn, Bodo and Shuai, Ling and Ying Yang, Michael},
  title     = {Natural Language Guided Visual Relationship Detection},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2019}
}