Reevaluating the Safety Impact of Inherent Interpretability on Deep Neural Networks for Pedestrian Detection

Patrick Feifel, Frank Bonarens, Frank Koster; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 29-37

Abstract


AI-based perception is a key factor in the automation of driving systems. A conclusive safety argumentation must provide evidence of safe functioning. Existing safety standards are not suited to non-interpretable deep neural networks (DNNs) that learn from unstructured data. This work provides a proof of concept for a comprehensible requirements analysis based on an interpretable DNN. Recent work on interpretability motivates a rethinking of the software considerations in safety standards. We describe how these established considerations can be applied to DNNs by integrating interpretability and identifying artifacts. DNN artifacts result from a meaningful decomposition of requirements and adaptations of the perception pipeline. To prove our concept, we propose an interpretable method for center, scale and prototype prediction (CSPP) that learns an explicitly structured latent space. The interpretability-based requirements analysis of CSPP is completed by tracing artifacts and source code to the decomposed requirements. Finally, qualitative post-hoc evaluations provide evidence for the fulfillment of the defined requirements on the latent space.
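To make the CSPP idea concrete, the following is a minimal sketch of a detection head with three branches on a shared feature map: a pedestrian center heatmap, a scale regression map, and a prototype (latent) branch intended to give the latent space an explicit structure. The class name `CSPPHead`, channel counts, and the prototype-similarity computation are illustrative assumptions for this sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn


class CSPPHead(nn.Module):
    """Illustrative CSPP-style head (assumed architecture, not the paper's code):
    center heatmap + scale map as in anchor-free center/scale prediction,
    plus a latent branch compared against learnable prototypes."""

    def __init__(self, in_channels: int = 256, num_prototypes: int = 8, proto_dim: int = 32):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.center = nn.Conv2d(256, 1, kernel_size=1)          # center heatmap (logits)
        self.scale = nn.Conv2d(256, 1, kernel_size=1)           # scale (e.g. log-height) regression
        self.latent = nn.Conv2d(256, proto_dim, kernel_size=1)  # per-pixel latent embedding
        # Learnable prototypes that are meant to structure the latent space.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, proto_dim))

    def forward(self, feats: torch.Tensor):
        x = self.stem(feats)
        center = self.center(x)
        scale = self.scale(x)
        z = self.latent(x)                                       # (B, D, H, W)
        # Similarity of each pixel embedding to each prototype -> (B, K, H, W)
        proto_sim = torch.einsum("bdhw,kd->bkhw", z, self.prototypes)
        return center, scale, proto_sim


if __name__ == "__main__":
    # Quick shape check on a dummy backbone feature map.
    head = CSPPHead()
    c, s, p = head(torch.randn(2, 256, 48, 96))
    print(c.shape, s.shape, p.shape)  # (2,1,48,96) (2,1,48,96) (2,8,48,96)
```

Such a per-branch decomposition is also what makes it possible to trace artifacts and source code back to individual decomposed requirements, as the abstract describes.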

Related Material


@InProceedings{Feifel_2021_CVPR,
  author    = {Feifel, Patrick and Bonarens, Frank and Koster, Frank},
  title     = {Reevaluating the Safety Impact of Inherent Interpretability on Deep Neural Networks for Pedestrian Detection},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2021},
  pages     = {29-37}
}