Visual Traffic Knowledge Graph Generation from Scene Images

Yunfei Guo, Fei Yin, Xiao-hui Li, Xudong Yan, Tao Xue, Shuqi Mei, Cheng-Lin Liu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 21604-21613

Abstract


Although previous works on traffic scene understanding have achieved great success, most of them stop at a low-level perception stage, such as road segmentation and lane detection, and few address high-level understanding. In this paper, we present Visual Traffic Knowledge Graph Generation (VTKGG), a new task for in-depth traffic scene understanding that aims to extract multiple kinds of information and integrate them into a knowledge graph. To achieve this goal, we first introduce a large dataset named the CASIA-Tencent Road Scene dataset (RS10K) with comprehensive annotations to support related research. Secondly, we propose a novel traffic scene parsing architecture containing a Hierarchical Graph ATtention network (HGAT) to analyze the heterogeneous elements and their complicated relations in traffic scene images. By hierarchizing the heterogeneous graph and equipping it with cross-level links, our approach fully exploits the correlations among various elements and acquires accurate relations. The experimental results show that our method can effectively generate visual traffic knowledge graphs and achieve state-of-the-art performance.
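To make the graph-attention idea behind HGAT concrete, the sketch below shows a minimal, single-head graph attention layer in PyTorch, where each node (e.g., a traffic-scene element) aggregates neighbor features weighted by learned attention scores. This is not the paper's HGAT implementation; it only illustrates the standard graph-attention mechanism that such architectures build on, and all names, shapes, and the toy adjacency are assumptions for illustration.

```python
# Minimal, illustrative graph-attention layer (not the paper's HGAT).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGATLayer(nn.Module):
    """Single-head graph attention: each node aggregates neighbor features
    weighted by learned attention scores."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        # Attention scoring applied to concatenated (source, destination) features.
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (N, in_dim) node features
        # adj: (N, N) binary adjacency (1 where a link, e.g. a cross-level link, exists)
        h = self.proj(x)                               # (N, out_dim)
        N = h.size(0)
        h_src = h.unsqueeze(1).expand(N, N, -1)        # (N, N, out_dim)
        h_dst = h.unsqueeze(0).expand(N, N, -1)        # (N, N, out_dim)
        scores = F.leaky_relu(self.attn(torch.cat([h_src, h_dst], dim=-1))).squeeze(-1)
        # Mask non-edges before softmax so attention stays on the graph.
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)          # (N, N) attention weights
        return F.elu(alpha @ h)                        # (N, out_dim)


if __name__ == "__main__":
    # Toy example: 5 nodes with random features and a hand-made adjacency
    # (self-loops plus random links) standing in for scene-element relations.
    x = torch.randn(5, 16)
    adj = ((torch.eye(5) + torch.bernoulli(torch.full((5, 5), 0.4))) > 0).float()
    layer = SimpleGATLayer(16, 32)
    print(layer(x, adj).shape)  # torch.Size([5, 32])
```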

Related Material


@InProceedings{Guo_2023_ICCV,
    author    = {Guo, Yunfei and Yin, Fei and Li, Xiao-hui and Yan, Xudong and Xue, Tao and Mei, Shuqi and Liu, Cheng-Lin},
    title     = {Visual Traffic Knowledge Graph Generation from Scene Images},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {21604-21613}
}