Driving Through the Concept Gridlock: Unraveling Explainability Bottlenecks in Automated Driving

Jessica Echterhoff, An Yan, Kyungtae Han, Amr Abdelraouf, Rohit Gupta, Julian McAuley; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024, pp. 7346-7355

Abstract

Concept bottleneck models have been used successfully for explainable machine learning by encoding information within the model through a set of human-defined concepts. In the context of human-assisted or autonomous driving, explainability can improve user acceptance and understanding of the decisions made by the vehicle, and can be used to rationalize and explain driver or vehicle behavior. We propose a new approach that uses concept bottlenecks as visual features for control command prediction and for explaining user and vehicle behavior. We learn a human-understandable concept layer that explains sequential driving scenes while the model learns vehicle control commands. This approach can then be used to determine whether a change in a preferred gap or in steering commands from a human (or autonomous vehicle) is driven by an external stimulus or by a change in preferences. We achieve performance competitive with latent visual features while gaining interpretability within our model setup.
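
The core idea, routing visual features through a supervised layer of human-defined concepts before predicting control commands, can be illustrated with a short sketch. Below is a minimal, assumed PyTorch implementation: the concept list, backbone, tensor sizes, and loss weighting are hypothetical placeholders for illustration, not the authors' actual architecture or training setup.

    # Minimal concept-bottleneck sketch (assumptions: concept names, backbone,
    # and layer sizes are illustrative, not the paper's configuration).
    import torch
    import torch.nn as nn

    CONCEPTS = ["lead_vehicle", "traffic_light", "pedestrian", "lane_curvature"]  # hypothetical

    class ConceptBottleneckDriver(nn.Module):
        def __init__(self, num_concepts=len(CONCEPTS), hidden=64):
            super().__init__()
            # Visual encoder: any image backbone works; a tiny CNN keeps this self-contained.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            # Bottleneck: every downstream prediction must pass through these
            # human-defined concepts, which is what makes the model explainable.
            self.concept_head = nn.Linear(32, num_concepts)
            # Control head predicts commands (e.g., steering angle, gap) from concepts only.
            self.control_head = nn.Sequential(
                nn.Linear(num_concepts, hidden), nn.ReLU(), nn.Linear(hidden, 2),
            )

        def forward(self, frames):
            feats = self.encoder(frames)
            concepts = torch.sigmoid(self.concept_head(feats))  # interpretable activations
            controls = self.control_head(concepts)              # commands explained by concepts
            return concepts, controls

    # Joint objective: concept supervision plus control regression (equal weights assumed).
    model = ConceptBottleneckDriver()
    frames = torch.randn(4, 3, 64, 64)                          # dummy batch of scene frames
    concept_labels = torch.rand(4, len(CONCEPTS))               # dummy concept annotations
    control_labels = torch.randn(4, 2)                          # dummy control targets
    concepts, controls = model(frames)
    loss = nn.functional.binary_cross_entropy(concepts, concept_labels) \
         + nn.functional.mse_loss(controls, control_labels)

Because the control head sees only the concept activations, inspecting which concepts change between frames offers a direct readout of why a predicted command changed, which is what the abstract's stimulus-versus-preference distinction relies on.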

Related Material

[pdf] [arXiv]
[bibtex]
@InProceedings{Echterhoff_2024_WACV,
    author    = {Echterhoff, Jessica and Yan, An and Han, Kyungtae and Abdelraouf, Amr and Gupta, Rohit and McAuley, Julian},
    title     = {Driving Through the Concept Gridlock: Unraveling Explainability Bottlenecks in Automated Driving},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2024},
    pages     = {7346-7355}
}