Attentive Relational Networks for Mapping Images to Scene Graphs

Mengshi Qi, Weijian Li, Zhengyuan Yang, Yunhong Wang, Jiebo Luo; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 3957-3966


Scene graph generation refers to the task of automatically mapping an image into a semantic structural graph, which requires correctly labeling each extracted object and its interaction relationships. Despite recent success in object detection using deep learning techniques, inferring complex contextual relationships and structured graph representations from visual data remains a challenging problem. In this study, we propose a novel Attentive Relational Network that consists of two key modules on top of an object detection backbone. The first is a semantic transformation module that captures semantically embedded relation features by translating visual features and linguistic features into a common semantic space. The second is a graph self-attention module that embeds a joint graph representation by assigning different importance weights to neighboring nodes. Finally, accurate scene graphs are produced by a relation inference module that recognizes all entities and their corresponding relations. We evaluate our proposed method on the widely adopted Visual Genome Dataset, and the results demonstrate the effectiveness and superiority of our model.
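The graph self-attention module described above weights each node's neighbors by learned importance before aggregating them. As a rough illustration of that idea (a minimal single-head, GAT-style layer in NumPy, not the authors' exact formulation; all names and shapes here are illustrative assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_self_attention(H, A, W, a):
    """One self-attention pass over a graph (illustrative sketch).

    H: (N, F) node features; A: (N, N) adjacency (nonzero = neighbor,
    including self-loops); W: (F, Fp) projection; a: (2*Fp,) attention vector.
    Returns updated node embeddings of shape (N, Fp).
    """
    Z = H @ W                                    # project node features
    N = Z.shape[0]
    logits = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            # e_ij = LeakyReLU(a^T [z_i || z_j])
            e = a @ np.concatenate([Z[i], Z[j]])
            logits[i, j] = e if e > 0 else 0.2 * e
    logits = np.where(A > 0, logits, -1e9)       # mask out non-neighbors
    alpha = softmax(logits, axis=1)              # importance weights per node
    return alpha @ Z                             # weighted neighbor aggregation
```

In a scene-graph setting, `H` would hold per-object (or per-relation) features and `A` the candidate-relation connectivity, so each node's embedding becomes a weighted mixture of its most relevant neighbors.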


@InProceedings{Qi_2019_CVPR,
author = {Qi, Mengshi and Li, Weijian and Yang, Zhengyuan and Wang, Yunhong and Luo, Jiebo},
title = {Attentive Relational Networks for Mapping Images to Scene Graphs},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}