Reevaluating the Safety Impact of Inherent Interpretability on Deep Neural Networks for Pedestrian Detection
AI-based perception is a key enabler of automated driving systems. A conclusive safety argument must provide evidence of safe functioning, yet existing safety standards are not equipped to deal with non-interpretable deep neural networks (DNNs) that learn from unstructured data. This work provides a proof of concept for a comprehensible requirements analysis based on an interpretable DNN. Recent work on interpretability motivates rethinking the software considerations of safety standards. We describe how established considerations can be applied to DNNs by integrating interpretability and identifying artifacts; these DNN artifacts result from a meaningful decomposition of requirements and adaptations of the perception pipeline. To prove our concept, we propose an interpretable method for center, scale, and prototype prediction (CSPP) that learns an explicitly structured latent space. The interpretability-based requirements analysis of CSPP is completed by tracing artifacts and source code back to the decomposed requirements. Finally, qualitative post-hoc evaluations provide evidence that the defined requirements for the latent space are fulfilled.
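To make the idea of a structured latent space concrete, the following is a minimal, hypothetical sketch of what a CSPP-style head could look like: from a shared feature map it predicts a center heatmap, a scale map, and per-location prototype logits whose argmax assigns each location to one of K learned prototypes. All names, dimensions, and weights here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions: feature channels, spatial size, number of prototypes.
C, H, W, K = 16, 8, 8, 4

# Stand-in for a backbone feature map (C x H x W).
features = rng.standard_normal((C, H, W))

# 1x1 convolutions realised as channel-mixing matrix products
# (randomly initialised here; in practice these would be learned).
w_center = rng.standard_normal((1, C)) * 0.1
w_scale = rng.standard_normal((1, C)) * 0.1
w_proto = rng.standard_normal((K, C)) * 0.1

def conv1x1(w, x):
    """Apply a 1x1 convolution (pure channel mixing) to a C x H x W tensor."""
    c_out, c_in = w.shape
    return (w @ x.reshape(c_in, -1)).reshape(c_out, *x.shape[1:])

# Center heatmap in [0, 1] via a sigmoid.
center = 1.0 / (1.0 + np.exp(-conv1x1(w_center, features)))
# Strictly positive scale predictions via an exponential.
scale = np.exp(conv1x1(w_scale, features))
# Prototype logits; the argmax over channels assigns each location to a
# prototype, which is what gives the latent space its explicit structure.
proto_logits = conv1x1(w_proto, features)
proto_assign = proto_logits.argmax(axis=0)  # H x W map of prototype ids

print(center.shape, scale.shape, proto_assign.shape)
```

The separation into three interpretable outputs (where, how large, and which prototype) is what would allow individual latent-space requirements to be traced and evaluated post hoc.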