CODEs: Chamfer Out-of-Distribution Examples Against Overconfidence Issue

Keke Tang, Dingruibo Miao, Weilong Peng, Jianpeng Wu, Yawen Shi, Zhaoquan Gu, Zhihong Tian, Wenping Wang; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 1153-1162

Abstract


Overconfident predictions on out-of-distribution (OOD) samples are a thorny issue for deep neural networks. The key to resolving the OOD overconfidence issue is to build a subset of OOD samples and then suppress predictions on them. This paper proposes Chamfer OOD examples (CODEs), whose distribution is close to that of in-distribution samples and which can therefore be utilized to effectively alleviate the OOD overconfidence issue by suppressing predictions on them. To obtain CODEs, we first generate seed OOD examples via slicing & splicing operations on in-distribution samples from different categories, and then feed them to the Chamfer generative adversarial network for distribution transformation, without access to any extra data. Training with suppressed predictions on CODEs is validated to largely alleviate the OOD overconfidence issue without hurting classification accuracy, and to outperform state-of-the-art methods. In addition, we demonstrate that CODEs are useful for improving OOD detection and classification.
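Below is a minimal sketch of the two steps the abstract names: generating a seed OOD example by slicing & splicing in-distribution images from different categories, and training with a suppression term that pushes predictions on OOD examples toward the uniform distribution. All function and variable names are illustrative assumptions, not the authors' released code; the Chamfer GAN distribution-transformation stage is omitted, and a simple vertical cut stands in for whatever slicing operation the paper uses.

```python
# Sketch only: assumed names, simplified slicing, Chamfer GAN stage omitted.
import torch
import torch.nn.functional as F


def slice_and_splice(img_a: torch.Tensor, img_b: torch.Tensor) -> torch.Tensor:
    """Splice the left half of img_a with the right half of img_b.

    img_a, img_b: (C, H, W) images drawn from *different* categories.
    A single vertical cut is one simple choice of slicing operation.
    """
    w = img_a.shape[-1]
    return torch.cat([img_a[..., : w // 2], img_b[..., w // 2 :]], dim=-1)


def suppression_loss(logits_ood: torch.Tensor) -> torch.Tensor:
    """KL divergence from the uniform distribution to the predicted
    distribution on OOD examples, so the network learns to be
    maximally uncertain on them (one common choice of suppression loss)."""
    log_probs = F.log_softmax(logits_ood, dim=-1)
    num_classes = logits_ood.shape[-1]
    uniform = torch.full_like(log_probs, 1.0 / num_classes)
    return F.kl_div(log_probs, uniform, reduction="batchmean")


def training_step(model, x_in, y_in, x_ood, lam: float = 0.5):
    """One combined step: standard cross-entropy on in-distribution data
    plus a weighted suppression term on the (transformed) OOD examples.
    `lam` is a hypothetical trade-off weight, not a value from the paper."""
    ce = F.cross_entropy(model(x_in), y_in)
    ood = suppression_loss(model(x_ood))
    return ce + lam * ood
```

In the paper's pipeline, the spliced seeds would additionally pass through the Chamfer GAN before entering `training_step`, so that the suppressed examples lie close to the in-distribution manifold rather than looking like obvious cut-and-paste artifacts.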

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Tang_2021_ICCV,
    author    = {Tang, Keke and Miao, Dingruibo and Peng, Weilong and Wu, Jianpeng and Shi, Yawen and Gu, Zhaoquan and Tian, Zhihong and Wang, Wenping},
    title     = {CODEs: Chamfer Out-of-Distribution Examples Against Overconfidence Issue},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {1153-1162}
}