Transformer-Based Fusion of 2D-Pose and Spatio-Temporal Embeddings for Distracted Driver Action Recognition

Erkut Akdag, Zeqi Zhu, Egor Bondarev, Peter H.N. de With; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023, pp. 5453-5462

Abstract


Classification and temporal localization of driving actions are important for advanced driver-assistance systems and naturalistic driving studies. Temporal localization is challenging because it requires robustness, reliability, and accuracy. In this study, we aim to improve temporal localization and classification accuracy by combining a video action recognition network and a 2D human-pose estimation network into a single model. To this end, we design a transformer-based fusion architecture that effectively combines 2D-pose features and spatio-temporal features. The model uses the 2D-pose features as the positional embedding of the transformer and the spatio-temporal features as the main input to the transformer encoder. The proposed solution is generic and independent of the number and positions of cameras, and outputs frame-based class probabilities. Finally, a post-processing step combines information from the different camera views to obtain the final predictions and eliminate false positives. The model performs well on the A2 test set of the 2023 NVIDIA AI City Challenge for naturalistic driving action recognition, achieving an overlap score of 0.5079 on the organizer-defined distracted driver behaviour metric.
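The fusion scheme described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all dimensions, the linear classifier stand-in for the transformer encoder, and the confidence threshold are assumptions for illustration. It shows the core idea of the abstract: pose features take the place of the positional embedding added to the spatio-temporal tokens, and per-frame probabilities from multiple camera views are averaged in post-processing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not specified in the abstract).
T, D = 16, 64        # clip length in frames, embedding size
num_classes = 16

# Spatio-temporal clip features: one token per frame (assumed shape),
# and 2D-pose features projected to the same dimension.
st_tokens = rng.standard_normal((T, D))
pose_embed = rng.standard_normal((T, D))

# Fusion as described: pose features serve as the positional embedding
# added to the spatio-temporal tokens before the transformer encoder.
encoder_input = st_tokens + pose_embed

# Stand-in for the transformer encoder + classification head,
# producing frame-level class probabilities via a softmax.
W = rng.standard_normal((D, num_classes))
logits = encoder_input @ W
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

# Post-processing sketch: average per-frame probabilities across camera
# views (duplicated here for illustration) and keep only confident
# predictions to suppress likely false positives.
views = np.stack([probs, probs])      # (num_views, T, num_classes)
fused = views.mean(axis=0)
pred = fused.argmax(axis=1)           # per-frame class prediction
confident = fused.max(axis=1) > 0.5   # assumed confidence threshold
```

In this sketch, each of the `T` frames yields one class probability vector, so per-frame predictions can be grouped into temporal segments for the localization task.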

Related Material


@InProceedings{Akdag_2023_CVPR,
    author    = {Akdag, Erkut and Zhu, Zeqi and Bondarev, Egor and de With, Peter H.N.},
    title     = {Transformer-Based Fusion of 2D-Pose and Spatio-Temporal Embeddings for Distracted Driver Action Recognition},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {5453-5462}
}