Performance Prediction for Semantic Segmentation by a Self-Supervised Image Reconstruction Decoder

Andreas Bär, Marvin Klingner, Jonas Löhdefink, Fabian Hüger, Peter Schlicht, Tim Fingscheidt; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2022, pp. 4399-4408

Abstract


In supervised learning, a deep neural network's performance is measured using ground truth data. In semantic segmentation, ground truth data is sparse, requires an expensive annotation process, and, most importantly, is not available during online operation. To tackle this problem, recent works propose various forms of performance prediction. However, they rely either on inference data histograms, additional sensors, or additional training data. In this paper, we propose a novel per-image performance prediction for semantic segmentation, with (i) no need for additional sensors (sensor efficiency), (ii) no need for additional training data (data efficiency), and (iii) no need for a dedicated retraining of the semantic segmentation network (training efficiency). Specifically, we extend an already trained semantic segmentation network, whose parameters remain fixed, with an image reconstruction decoder. After training the decoder and performing a subsequent regression, the image reconstruction quality is evaluated to predict the semantic segmentation performance. We demonstrate our method's effectiveness by setting a new state-of-the-art benchmark on both KITTI and Cityscapes among image-only input methods, on Cityscapes even surpassing a LiDAR-supported benchmark.
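To make the pipeline described in the abstract concrete, the following is a minimal sketch of the idea in PyTorch: a reconstruction decoder is attached to a frozen segmentation network, trained self-supervised on images alone, and a per-image reconstruction quality measure (here assumed to be PSNR) is regressed onto segmentation quality (mIoU). All class names, the decoder architecture, the loss, and the choice of PSNR/linear regression are illustrative assumptions and not reproduced from the paper.

# Minimal sketch (hypothetical names): frozen segmentation encoder + trainable
# reconstruction decoder; per-image PSNR is later mapped to a predicted mIoU.
import torch
import torch.nn as nn

class ReconstructionDecoder(nn.Module):
    """Illustrative decoder mapping encoder features back to an RGB image."""
    def __init__(self, in_channels=256):
        super().__init__()
        # Three stride-2 transposed convolutions upsample 8x back to input size,
        # assuming the encoder outputs features at 1/8 resolution.
        self.up = nn.Sequential(
            nn.ConvTranspose2d(in_channels, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, features):
        return self.up(features)

def train_decoder(encoder, decoder, loader, epochs=10, lr=1e-4, device="cpu"):
    """Self-supervised training: only the decoder's parameters are updated.
    `encoder` is assumed to expose intermediate features of the trained
    segmentation network, whose weights stay fixed."""
    encoder.eval()
    for p in encoder.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(decoder.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for images in loader:          # no labels needed
            images = images.to(device)
            with torch.no_grad():
                feats = encoder(images)
            recon = decoder(feats)
            loss = mse(recon, images)
            opt.zero_grad()
            loss.backward()
            opt.step()

def psnr(recon, target, eps=1e-8):
    """Per-image PSNR in dB for images scaled to [0, 1] (NCHW tensors)."""
    mse = torch.mean((recon - target) ** 2, dim=(1, 2, 3))
    return 10.0 * torch.log10(1.0 / (mse + eps))

def fit_regression(psnr_values, miou_values):
    """Least-squares line mapping reconstruction PSNR to mIoU; returns (slope, intercept)."""
    x = torch.stack([psnr_values, torch.ones_like(psnr_values)], dim=1)
    coeffs = torch.linalg.lstsq(x, miou_values.unsqueeze(1)).solution
    return coeffs.squeeze(1)

At test time, one would reconstruct each incoming image, compute its PSNR, and apply the fitted line to obtain a per-image mIoU estimate without any ground truth; the labeled data needed to fit the regression is only the segmentation network's original validation set, consistent with the data-efficiency claim above.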

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Bar_2022_CVPR,
  author    = {B\"ar, Andreas and Klingner, Marvin and L\"ohdefink, Jonas and H\"uger, Fabian and Schlicht, Peter and Fingscheidt, Tim},
  title     = {Performance Prediction for Semantic Segmentation by a Self-Supervised Image Reconstruction Decoder},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2022},
  pages     = {4399-4408}
}