Gaussian Latent Representations for Uncertainty Estimation Using Mahalanobis Distance in Deep Classifiers

Aishwarya Venkataramanan, Assia Benbihi, Martin Laviale, Cédric Pradalier; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023, pp. 4488-4497

Abstract


Recent works show that the data distribution in a network's latent space is useful for estimating classification uncertainty and detecting Out-Of-Distribution (OOD) samples. To obtain a well-regularized latent space that is conducive to uncertainty estimation, existing methods introduce significant changes to model architectures and training procedures. In this paper, we present a lightweight, high-performance regularization method for Mahalanobis distance (MD)-based uncertainty prediction that requires minimal changes to the network's architecture. To derive Gaussian latent representations favourable for MD calculation, we introduce a self-supervised representation learning method that separates in-class representations into multiple Gaussians. Classes with non-Gaussian representations are automatically identified and dynamically clustered into multiple new classes that are approximately Gaussian. Evaluation on standard OOD benchmarks shows that our method achieves state-of-the-art results on OOD detection and is very competitive on predictive probability calibration. Finally, we show the applicability of our method to a real-life computer vision use case on microorganism classification.
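To illustrate the kind of MD-based OOD scoring the abstract refers to, the following is a minimal sketch (not the paper's implementation): it fits one Gaussian per class in a latent feature space, with a tied covariance as is common in MD-based OOD work, and scores a sample by its minimum squared Mahalanobis distance to any class mean. All function and variable names here are illustrative.

```python
import numpy as np

def fit_class_gaussians(features, labels):
    """Fit class-conditional Gaussians in latent space: per-class means
    plus a shared (tied) covariance pooled over centred class features."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centred = np.concatenate([features[labels == c] - means[c] for c in classes])
    # Small ridge term keeps the covariance invertible
    cov = np.cov(centred, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    precision = np.linalg.inv(cov)
    return means, precision

def mahalanobis_ood_score(x, means, precision):
    """OOD score: minimum squared Mahalanobis distance to any class Gaussian.
    Larger scores mean the sample lies farther from all in-distribution classes."""
    dists = [(x - mu) @ precision @ (x - mu) for mu in means.values()]
    return min(dists)
```

In this sketch, an in-distribution sample lands close to some class mean and receives a low score, while an OOD sample is far from every class Gaussian and receives a high one; thresholding the score yields an OOD detector.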

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Venkataramanan_2023_ICCV,
    author    = {Venkataramanan, Aishwarya and Benbihi, Assia and Laviale, Martin and Pradalier, C\'edric},
    title     = {Gaussian Latent Representations for Uncertainty Estimation Using Mahalanobis Distance in Deep Classifiers},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2023},
    pages     = {4488-4497}
}