RCL: Reliable Continual Learning for Unified Failure Detection

Fei Zhu, Zhen Cheng, Xu-Yao Zhang, Cheng-Lin Liu, Zhaoxiang Zhang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 12140-12150

Abstract


Deep neural networks are known to be overconfident about what they don't know in the wild, which is undesirable for decision-making in high-stakes applications. Despite a large body of existing work, most methods focus on detecting out-of-distribution (OOD) samples from unseen classes while ignoring a large portion of relevant failure sources, such as misclassified samples from known classes. In particular, recent studies reveal that prevalent OOD detection methods are actually harmful for misclassification detection (MisD), indicating that there seems to be a tradeoff between the two tasks. In this paper, we study the critical yet under-explored problem of unified failure detection, which aims to detect both misclassified and OOD examples. Concretely, we identify the failure of simply integrating the learning objectives of misclassification and OOD detection, and show the potential of sequence learning. Inspired by this, we propose a reliable continual learning paradigm, whose spirit is to equip the model with MisD ability first and then improve its OOD detection ability without degrading the already adequate MisD performance. Extensive experiments demonstrate that our method achieves strong unified failure detection performance. The code is available at https://github.com/Impression2805/RCL.
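
The abstract only sketches the paradigm at a high level. As an illustration only, the PyTorch-style sketch below shows one way a two-stage "MisD first, then OOD" training loop could be organized; the specific losses (plain cross-entropy in stage 1, outlier exposure plus distillation against a frozen copy of the stage-1 model in stage 2), the weights lam_ood and lam_keep, and all function names are assumptions made for this sketch and are not taken from the paper. Refer to the official repository linked above for the authors' actual implementation.

import copy
import torch
import torch.nn.functional as F

def train_stage1_misd(model, loader, optimizer, epochs=10, device="cuda"):
    """Stage 1 (assumed): standard cross-entropy training; the max-softmax
    score of the resulting model serves as a MisD confidence."""
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            loss = F.cross_entropy(model(x), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model

def train_stage2_ood(model, id_loader, ood_loader, optimizer,
                     epochs=5, lam_ood=0.5, lam_keep=1.0, device="cuda"):
    """Stage 2 (assumed): improve OOD detection via outlier exposure while
    distilling from the frozen stage-1 model so the in-distribution
    confidence ranking (MisD ability) is not degraded."""
    anchor = copy.deepcopy(model).eval()          # frozen stage-1 reference
    for p in anchor.parameters():
        p.requires_grad_(False)
    model.train()
    for _ in range(epochs):
        for (x_id, y_id), (x_ood, _) in zip(id_loader, ood_loader):
            x_id, y_id, x_ood = x_id.to(device), y_id.to(device), x_ood.to(device)
            logits_id = model(x_id)
            logits_ood = model(x_ood)
            # (a) keep classifying in-distribution data correctly
            loss_cls = F.cross_entropy(logits_id, y_id)
            # (b) outlier exposure: push OOD predictions toward uniform
            loss_ood = -F.log_softmax(logits_ood, dim=1).mean()
            # (c) preserve stage-1 (MisD) behaviour via distillation
            with torch.no_grad():
                ref = F.softmax(anchor(x_id), dim=1)
            loss_keep = F.kl_div(F.log_softmax(logits_id, dim=1), ref,
                                 reduction="batchmean")
            loss = loss_cls + lam_ood * loss_ood + lam_keep * loss_keep
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model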

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Zhu_2024_CVPR,
    author    = {Zhu, Fei and Cheng, Zhen and Zhang, Xu-Yao and Liu, Cheng-Lin and Zhang, Zhaoxiang},
    title     = {RCL: Reliable Continual Learning for Unified Failure Detection},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {12140-12150}
}