Out-of-Distribution Detection with Adversarial Outlier Exposure

Thomas Botschen, Konstantin Kirchheim, Frank Ortmeier; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops, 2025, pp. 4391-4400

Abstract


Machine learning models typically perform reliably only on inputs drawn from the distribution they were trained on, making Out-of-Distribution (OOD) detection essential for safety-critical applications. While exposing models to example outliers during training is one of the most effective ways to enhance OOD detection, recent studies suggest that synthetically generated outliers can also act as regularizers for deep neural networks. In this paper, we propose an augmentation scheme for synthetic outliers that regularizes a classifier's energy function by adversarially lowering the outliers' energy during training. We demonstrate that our method improves OOD detection performance and adversarial robustness on OOD data across several image classification benchmarks. Additionally, we show that our approach preserves in-distribution generalization. Our code is publicly available.
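As a rough illustration of the idea summarized above, the PyTorch-style sketch below combines a standard energy-margin outlier-exposure loss with a PGD-style inner loop that adversarially lowers the energy of the outlier samples before they enter the regularizer. The function names, the margin values m_in/m_out, and the perturbation budget are illustrative assumptions for this sketch, not the authors' implementation.

import torch
import torch.nn.functional as F

def energy(logits):
    # Free energy of a classifier: E(x) = -logsumexp_k f_k(x)
    return -torch.logsumexp(logits, dim=1)

def adversarial_outliers(model, x_out, eps=8/255, step=2/255, n_steps=5):
    # Perturb the synthetic/auxiliary outliers so that their energy is *lowered*,
    # i.e. they are pushed toward looking in-distribution (illustrative PGD loop).
    x_adv = x_out.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        e = energy(model(x_adv)).mean()
        grad, = torch.autograd.grad(e, x_adv)
        # gradient *descent* on the outliers' energy
        x_adv = (x_adv - step * grad.sign()).detach()
        x_adv = torch.clamp(x_out + torch.clamp(x_adv - x_out, -eps, eps), 0, 1)
        x_adv.requires_grad_(True)
    return x_adv.detach()

def training_step(model, x_in, y_in, x_out, m_in=-25.0, m_out=-7.0, lam=0.1):
    # Cross-entropy on in-distribution data plus an energy-margin regularizer
    # applied to the adversarially augmented outliers (margins are illustrative).
    x_adv = adversarial_outliers(model, x_out)
    logits_in, logits_out = model(x_in), model(x_adv)
    ce = F.cross_entropy(logits_in, y_in)
    e_in, e_out = energy(logits_in), energy(logits_out)
    reg = (F.relu(e_in - m_in) ** 2).mean() + (F.relu(m_out - e_out) ** 2).mean()
    return ce + lam * reg

In this sketch, the regularizer pushes in-distribution energies below m_in and outlier energies above m_out, while the inner loop makes the exposed outliers maximally hard for that objective; how the paper instantiates the energy regularizer and the attack is described in the full text.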

Related Material


[pdf]
[bibtex]
@InProceedings{Botschen_2025_CVPR,
    author    = {Botschen, Thomas and Kirchheim, Konstantin and Ortmeier, Frank},
    title     = {Out-of-Distribution Detection with Adversarial Outlier Exposure},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops},
    month     = {June},
    year      = {2025},
    pages     = {4391-4400}
}