Enhancing Traffic Safety with Parallel Dense Video Captioning for End-to-End Event Analysis

Maged Shoman, Dongdong Wang, Armstrong Aboah, Mohamed Abdel-Aty; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 7125-7133

Abstract


This paper introduces our solution for Track 2 of the AI City Challenge 2024. The task addresses traffic safety description and analysis using the Woven Traffic Safety (WTS) dataset, a real-world pedestrian-centric traffic video dataset for fine-grained spatial-temporal understanding. Our solution focuses on the following points: 1) To address dense video captioning, we leverage the framework of dense video captioning with parallel decoding (PDVC) to model visual-language sequences and generate chapter-wise dense captions for each video. 2) We leverage CLIP to extract visual features, enabling more efficient cross-modality training between visual and textual representations. 3) We conduct domain-specific model adaptation to mitigate the domain shift problem, which poses a recognition challenge in video understanding. 4) We further leverage BDD-5K captioned videos for knowledge transfer, yielding better understanding of WTS videos and more accurate captioning. Our solution achieved 6th place on the competition test set. The open-source code will be available at https://github.com/UCF-SST-Lab/AICity2024CVPRW
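To illustrate the parallel decoding idea behind PDVC, the sketch below shows a simplified, conceptual module (not the authors' code): a fixed set of learnable event queries cross-attends to frame features, and each query independently predicts an event's temporal boundaries and caption, so all events are decoded in parallel rather than sequentially. The dimensions, query count, and the single linear layer standing in for PDVC's actual captioning head are illustrative assumptions.

```python
# Conceptual sketch of PDVC-style parallel decoding (simplified, not the
# authors' implementation). All hyperparameters here are illustrative.
import torch
import torch.nn as nn

class ParallelCaptionDecoder(nn.Module):
    def __init__(self, feat_dim=512, num_queries=10, vocab_size=1000, max_words=20):
        super().__init__()
        # Learnable event queries, one per candidate event (DETR-style).
        self.queries = nn.Parameter(torch.randn(num_queries, feat_dim))
        layer = nn.TransformerDecoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.boundary_head = nn.Linear(feat_dim, 2)  # normalized (start, end)
        # A single linear layer stands in for PDVC's recurrent captioning head.
        self.caption_head = nn.Linear(feat_dim, max_words * vocab_size)
        self.max_words, self.vocab_size = max_words, vocab_size

    def forward(self, frame_feats):                      # (B, T, feat_dim)
        B = frame_feats.size(0)
        q = self.queries.unsqueeze(0).expand(B, -1, -1)  # (B, N, feat_dim)
        h = self.decoder(q, frame_feats)                 # queries attend to frames
        bounds = self.boundary_head(h).sigmoid()         # (B, N, 2) in [0, 1]
        words = self.caption_head(h).view(B, -1, self.max_words, self.vocab_size)
        return bounds, words                             # one event + caption per query
```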
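The CLIP feature extraction step can be sketched as follows, assuming frames are sampled from each video and embedded with a pretrained CLIP image encoder. The checkpoint name (`openai/clip-vit-base-patch32`), the sampling interval, and the function name are assumptions for illustration, not details from the paper.

```python
# Minimal sketch of CLIP visual feature extraction for video frames,
# using Hugging Face transformers and OpenCV. Illustrative only.
import cv2
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

def extract_clip_features(video_path, every_n_frames=15):
    """Sample every every_n_frames-th frame and return CLIP image embeddings."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n_frames == 0:
            # OpenCV decodes BGR; CLIP's processor expects RGB.
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        idx += 1
    cap.release()
    inputs = processor(images=frames, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)  # (num_frames, 512)
    return feats

# feats = extract_clip_features("example_video.mp4")
```

These per-frame embeddings would then serve as the visual sequence fed to the dense captioning decoder.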

Related Material


@InProceedings{Shoman_2024_CVPR,
    author    = {Shoman, Maged and Wang, Dongdong and Aboah, Armstrong and Abdel-Aty, Mohamed},
    title     = {Enhancing Traffic Safety with Parallel Dense Video Captioning for End-to-End Event Analysis},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {7125-7133}
}