Introspection of 2D Object Detection Using Processed Neural Activation Patterns in Automated Driving Systems
While deep neural network (DNN) models have become extremely popular for object detection in automated driving systems (ADS), the dynamic and varied nature of the road traffic environment can still lead to model failures. To address this issue, researchers have recently explored introspection mechanisms, also known as self-assessment, for monitoring the quality of perception in ADS. Depending on the situation, these mechanisms can then either hand control back to the human driver in SAE Level 3 ADS or initiate a minimum risk maneuver in SAE Level 4 ADS. State-of-the-art introspection mechanisms for ADS train a neural network to learn, per frame, the relationship between the raw neural activation patterns of the underlying DNN-based perception function and the calculated mean average precision. In this paper, we show that raw activation patterns may contain misleading information for introspecting 2D object detection in ADS. To this end, we investigate how to optimally pre-process these patterns to improve error detection performance. We evaluate the developed mechanism with and without pre-processing of the raw neural activation patterns and compare its performance with a state-of-the-art algorithm, showing that, on the Berkeley DeepDrive (BDD) dataset, pre-processing reduces the ratio of missed errors by 14% and improves the overall detection performance by 3%.
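As a rough illustration of the kind of pre-processing the abstract refers to, the sketch below reduces a raw activation tensor from a detection backbone to a compact, normalized feature vector before it is passed to an introspection model. The tensor shape, function name, and the specific choice of global average pooling plus z-score normalization are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

def preprocess_activations(raw, eps=1e-8):
    """Reduce raw activation maps of shape (channels, height, width)
    to a normalized per-channel feature vector.

    Global average pooling discards spatial detail that can mislead the
    introspection model; z-score normalization puts channels on a
    common scale. Both steps are illustrative assumptions.
    """
    pooled = raw.mean(axis=(1, 2))                      # (channels,)
    return (pooled - pooled.mean()) / (pooled.std() + eps)

# Example: a synthetic activation tensor with 256 channels on a 20x20 grid.
rng = np.random.default_rng(0)
features = preprocess_activations(rng.standard_normal((256, 20, 20)))
print(features.shape)  # (256,)
```

In a setup like this, the resulting vector would be the input to the learned introspection function instead of the full raw activation maps.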