Simplification Is All You Need against Out-of-Distribution Overconfidence

Keke Tang, Chao Hou, Weilong Peng, Xiang Fang, Zhize Wu, Yongwei Nie, Wenping Wang, Zhihong Tian; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), 2025, pp. 5030-5040

Abstract


Deep neural networks (DNNs) often exhibit out-of-distribution (OOD) overconfidence, producing overly confident predictions on OOD samples. We attribute this issue to the inherent over-complexity of DNNs and investigate two key aspects: capacity and nonlinearity. First, we demonstrate that reducing model capacity through knowledge distillation can effectively mitigate OOD overconfidence. Second, we show that selectively reducing nonlinearity by removing ReLU operations further alleviates the issue. Building on these findings, we present a practical guide to model simplification, combining both strategies to significantly reduce OOD overconfidence. Extensive experiments validate the effectiveness of this approach in mitigating OOD overconfidence and demonstrate its superiority over state-of-the-art methods. Additionally, our simplification strategies can be combined with existing OOD detection techniques to further enhance OOD detection performance.
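
The abstract names two simplification strategies: reducing capacity via knowledge distillation and selectively removing ReLU operations to reduce nonlinearity. The sketch below illustrates, under stated assumptions, how one might combine the two in PyTorch; the distillation loss, temperature, loss weighting, and the choice of which ReLU to replace are illustrative assumptions, not the authors' exact procedure.

import torch
import torch.nn as nn
import torch.nn.functional as F


def remove_selected_relus(model, names_to_remove):
    """Replace the named ReLU submodules with identity, selectively reducing nonlinearity."""
    targets = [name for name, m in model.named_modules()
               if isinstance(m, nn.ReLU) and name in names_to_remove]
    for name in targets:
        parent = model
        *path, leaf = name.split(".")
        for p in path:
            parent = getattr(parent, p)
        setattr(parent, leaf, nn.Identity())
    return model


def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Standard knowledge-distillation loss: teacher soft targets plus hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard


# Toy example (assumed architectures): distill a higher-capacity teacher into a
# smaller student, then remove one ReLU from the student ("1" is that module's
# name inside nn.Sequential).
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
student = remove_selected_relus(student, names_to_remove={"1"})

optimizer = torch.optim.SGD(student.parameters(), lr=0.1)
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
with torch.no_grad():
    teacher_logits = teacher(x)
loss = distillation_loss(student(x), teacher_logits, y)
loss.backward()
optimizer.step()

The student here is deliberately narrower than the teacher, and replacing a ReLU with identity linearizes that layer boundary; both are simplifications in the sense the abstract describes, though which layers to linearize and how aggressively to shrink capacity would have to follow the paper's own guide.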

Related Material


[pdf]
[bibtex]
@InProceedings{Tang_2025_CVPR,
    author    = {Tang, Keke and Hou, Chao and Peng, Weilong and Fang, Xiang and Wu, Zhize and Nie, Yongwei and Wang, Wenping and Tian, Zhihong},
    title     = {Simplification Is All You Need against Out-of-Distribution Overconfidence},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {5030-5040}
}