Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem

Matthias Hein, Maksym Andriushchenko, Julian Bitterwolf; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019, pp. 58-74

Abstract


Classifiers used in the wild, in particular in safety-critical systems, should know when they do not know; in particular, they should make low-confidence predictions far away from the training data. We show that ReLU-type neural networks fail in this regard, as they almost always produce high-confidence predictions far away from the training data. For bounded domains, we propose a new robust optimization technique, similar to adversarial training, which enforces low-confidence predictions far away from the training data. We show that this technique is surprisingly effective in reducing the confidence of predictions far away from the training data, while maintaining high-confidence predictions and test error on the original classification task compared to standard training. This is a short version of the corresponding CVPR paper.
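
The robust optimization idea can be illustrated with a small training-loss sketch. Below is a minimal, hypothetical PyTorch-style implementation of the general recipe described in the abstract: an inner maximization that searches for high-confidence points in a neighbourhood of out-of-distribution samples, and an outer penalty that pushes the confidence at those points down. The function name, hyperparameters, and the choice of out-distribution batch x_out (e.g. random noise images) are illustrative assumptions, not the authors' exact formulation.

import torch
import torch.nn.functional as F

def low_confidence_robust_loss(model, x_in, y_in, x_out,
                               eps=0.3, pgd_steps=10,
                               pgd_step_size=0.05, lam=1.0):
    """Cross-entropy on in-distribution data plus a penalty enforcing
    low confidence on worst-case points around out-distribution samples
    (a sketch in the spirit of adversarial-training-style robust optimization)."""
    # Standard classification loss on the labelled training data.
    ce_loss = F.cross_entropy(model(x_in), y_in)

    # Inner maximization: find points near the out-distribution samples
    # where the model is most confident (highest max log-probability).
    z = x_out.clone().detach()
    for _ in range(pgd_steps):
        z.requires_grad_(True)
        max_log_conf = F.log_softmax(model(z), dim=1).max(dim=1).values.sum()
        grad = torch.autograd.grad(max_log_conf, z)[0]
        with torch.no_grad():
            z = z + pgd_step_size * grad.sign()          # ascend confidence
            z = x_out + (z - x_out).clamp(-eps, eps)     # stay in the eps-ball
            z = z.clamp(0.0, 1.0)                        # stay in the image domain
    z = z.detach()

    # Outer minimization: penalize the confidence at those worst-case points,
    # pushing the predictive distribution towards uniform far from the data.
    conf_penalty = F.log_softmax(model(z), dim=1).max(dim=1).values.mean()
    return ce_loss + lam * conf_penalty

In a training loop, x_in and y_in would come from the usual labelled training set, while x_out would be drawn from some out-distribution (for instance uniform noise), so that the classifier keeps its accuracy on the original task while its confidence far away from the training data is driven towards the uniform distribution.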

Related Material


[bibtex]
@InProceedings{Hein_2019_CVPR_Workshops,
author = {Hein, Matthias and Andriushchenko, Maksym and Bitterwolf, Julian},
title = {Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019}
}