Generalized ODIN: Detecting Out-of-Distribution Image Without Learning From Out-of-Distribution Data

Yen-Chang Hsu, Yilin Shen, Hongxia Jin, Zsolt Kira; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 10951-10960

Abstract

Deep neural networks attain remarkable performance on data drawn from the same distribution as the training set, but their performance can degrade significantly otherwise. Detecting whether an example is out-of-distribution (OoD) is therefore crucial for a system to reject such samples or alert users. Recent works have made significant progress on OoD benchmarks built from small image datasets. However, many recent methods based on neural networks rely on training or tuning with both in-distribution and out-of-distribution data. The latter is generally hard to define a priori, and its selection can easily bias the learning. We base our work on the popular method ODIN, proposing two strategies that free it from the need to tune with OoD data while improving its OoD detection performance: a decomposed confidence scoring function and a modified input pre-processing method. We show that both significantly improve detection performance. Our further analysis on a larger-scale image dataset shows that the two types of distribution shift, semantic shift and non-semantic shift, differ significantly in difficulty, providing an analysis of when ODIN-like strategies do or do not work.
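Since the abstract only names the two strategies, the following is a minimal PyTorch sketch of how they could fit together, based on the paper's description: a classifier head whose logits are decomposed as f_i(x) = h_i(x) / g(x) (the inner-product form of h; the paper also discusses Euclidean and cosine variants), and a perturbation-magnitude search that consults in-distribution validation data only. All class and function names, and the candidate epsilon grid, are illustrative assumptions, not the authors' released code.

import torch
import torch.nn as nn


class DecomposedConfidenceHead(nn.Module):
    """Replaces the usual linear classifier: logits f_i(x) = h_i(x) / g(x),
    trained with ordinary cross-entropy on the decomposed logits."""

    def __init__(self, feature_dim: int, num_classes: int):
        super().__init__()
        self.h = nn.Linear(feature_dim, num_classes)  # class-dependent scores h_i(x)
        self.g = nn.Sequential(                        # shared scale g(x) in (0, 1)
            nn.Linear(feature_dim, 1),
            nn.BatchNorm1d(1),
            nn.Sigmoid(),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.h(features) / self.g(features)


def ood_score(backbone, head, x):
    """Detection score S(x) = max_i h_i(x); larger means more in-distribution."""
    return head.h(backbone(x)).max(dim=1).values


def perturb(backbone, head, x, epsilon):
    """Nudge x one FGSM-style step in the direction that increases S(x)."""
    x = x.clone().requires_grad_(True)
    ood_score(backbone, head, x).sum().backward()
    return (x + epsilon * x.grad.sign()).detach()


def select_epsilon(backbone, head, in_dist_loader,
                   candidates=(0.0, 0.0025, 0.005, 0.01, 0.02, 0.04, 0.08)):
    """Pick the magnitude that maximizes the mean score on in-distribution
    validation data; no out-of-distribution samples are consulted."""
    backbone.eval()
    head.eval()
    best_eps, best_mean = 0.0, float("-inf")
    for eps in candidates:
        total, count = 0.0, 0
        for x, _ in in_dist_loader:
            x_hat = perturb(backbone, head, x, eps)
            total += ood_score(backbone, head, x_hat).sum().item()
            count += x.size(0)
        if total / count > best_mean:
            best_eps, best_mean = eps, total / count
    return best_eps

At test time, one would threshold S(x) computed on the perturbed input to decide whether an example is OoD; the candidate epsilon values above mirror typical ODIN-style grids and are an assumption.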

Related Material

BibTeX:
@InProceedings{Hsu_2020_CVPR,
    author    = {Hsu, Yen-Chang and Shen, Yilin and Jin, Hongxia and Kira, Zsolt},
    title     = {Generalized ODIN: Detecting Out-of-Distribution Image Without Learning From Out-of-Distribution Data},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2020},
    pages     = {10951-10960}
}