Attack End-to-End Autonomous Driving through Module-Wise Noise

Lu Wang, Tianyuan Zhang, Yikai Han, Muyang Fang, Ting Jin, Jiaqi Kang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 8349-8352

Abstract


With recent breakthroughs in deep neural networks, numerous tasks within autonomous driving have exhibited remarkable performance. However, deep learning models are susceptible to adversarial attacks, which present significant security risks to autonomous driving systems. Presently, end-to-end architectures have emerged as the predominant solution for autonomous driving, owing to their collaborative nature across different tasks. Yet the implications of adversarial attacks on such models remain relatively unexplored. In this paper, we conduct the first comprehensive adversarial security study of the modular end-to-end autonomous driving model. We thoroughly consider the potential vulnerabilities in the model inference process and design a universal attack scheme through module-wise noise injection. We conduct large-scale experiments on a full-stack autonomous driving model and demonstrate that our attack method outperforms previous attack methods. We trust that our research will offer fresh insights into ensuring the safety and reliability of autonomous driving systems.
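The abstract describes the attack only at a high level, so the following is a minimal illustrative sketch, not the authors' implementation. It assumes a PGD-style formulation of module-wise noise injection in PyTorch: learnable additive noise is attached to each intermediate module's output via forward hooks, and the noise is optimized to maximize a downstream planning loss. All names here (`module_wise_attack`, the frozen `model`, the `modules` list, `loss_fn`) are hypothetical, and the modules are assumed to return single tensors.

```python
import torch

def module_wise_attack(model, modules, batch, loss_fn,
                       steps=10, alpha=0.01, eps=0.1):
    """Hypothetical sketch: optimize additive noise on each module's
    output via PGD-style gradient ascent on a downstream loss.

    model    -- frozen end-to-end driving model (assumed interface)
    modules  -- submodules whose outputs will be perturbed
    batch    -- (inputs, targets) pair for one driving scene
    loss_fn  -- downstream (e.g. planning) loss to maximize (assumption)
    """
    inputs, targets = batch
    slots = [{"noise": None} for _ in modules]
    handles = []

    # Forward hooks add a learnable noise tensor to each module output;
    # returning a value from a forward hook replaces that output.
    def make_hook(slot):
        def hook(_module, _inp, out):
            if slot["noise"] is None:  # lazily match the output shape
                slot["noise"] = torch.zeros_like(out, requires_grad=True)
            return out + slot["noise"]
        return hook

    for m, slot in zip(modules, slots):
        handles.append(m.register_forward_hook(make_hook(slot)))

    model(inputs)  # one pass to instantiate all noise tensors
    for _ in range(steps):
        loss = loss_fn(model(inputs), targets)
        grads = torch.autograd.grad(loss, [s["noise"] for s in slots])
        with torch.no_grad():
            for slot, g in zip(slots, grads):
                # Ascent step on the loss, then project into the eps-ball.
                slot["noise"] += alpha * g.sign()
                slot["noise"].clamp_(-eps, eps)

    for h in handles:
        h.remove()
    return [s["noise"].detach() for s in slots]
```

In practice, `modules` would be the intermediate stages of a full-stack model (for instance its perception, prediction, and planning heads); which modules and which loss the paper actually perturbs is not specified in this abstract.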

Related Material


[pdf]
[bibtex]
@InProceedings{Wang_2024_CVPR,
    author    = {Wang, Lu and Zhang, Tianyuan and Han, Yikai and Fang, Muyang and Jin, Ting and Kang, Jiaqi},
    title     = {Attack End-to-End Autonomous Driving through Module-Wise Noise},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {8349-8352}
}