Fixing Overconfidence in Dynamic Neural Networks

Lassi Meronen, Martin Trapp, Andrea Pilzer, Le Yang, Arno Solin; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024, pp. 2680-2690

Abstract


Dynamic neural networks are a recent technique that promises a remedy for the increasing size of modern deep learning models by dynamically adapting their computational cost to the difficulty of the inputs. In this way, the model can adjust to a limited computational budget. However, the poor quality of uncertainty estimates in deep learning models makes it difficult to distinguish between hard and easy samples. To address this challenge, we present a computationally efficient approach for post-hoc uncertainty quantification in dynamic neural networks. We show that adequately quantifying and accounting for both aleatoric and epistemic uncertainty through a probabilistic treatment of the last layers improves predictive performance and aids decision-making when determining the computational budget. In experiments, we show improvements on CIFAR-100, ImageNet, and Caltech-256 in terms of accuracy, uncertainty quantification, and calibration error.
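The core mechanism, in rough terms: an early-exit network produces intermediate predictions, and a probabilistic treatment of each exit's last layer yields uncertainty estimates that gate whether computation continues. The sketch below is a minimal illustration of this idea, not the authors' implementation: EarlyExitNet, the diagonal Gaussian posterior over last-layer weights (a crude stand-in for a post-hoc Laplace approximation), the fixed weight_vars, and the entropy-based exit rule are all illustrative assumptions.

# Minimal sketch: post-hoc probabilistic last layers in an early-exit network.
# All names and hyperparameters are illustrative assumptions, not the authors'
# code: the toy backbone, the diagonal Gaussian posterior, and the entropy
# threshold stand in for the paper's approach.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EarlyExitNet(nn.Module):
    """Toy backbone with two sequential blocks and a classifier head per exit."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
        self.block2 = nn.Sequential(nn.Linear(64, 64), nn.ReLU())
        self.exit1 = nn.Linear(64, num_classes)  # early-exit head
        self.exit2 = nn.Linear(64, num_classes)  # final head

    def features(self, x):
        h1 = self.block1(x)
        h2 = self.block2(h1)
        return [h1, h2]


@torch.no_grad()
def predictive_probs(head, h, weight_var, n_samples=32):
    """Monte Carlo predictive distribution under a diagonal Gaussian posterior
    over the head's weights (a stand-in for a post-hoc Laplace approximation).
    Averaging softmax outputs over weight samples folds epistemic uncertainty
    (weight variance) into the predicted class probabilities."""
    std = weight_var.sqrt()
    probs = torch.zeros(h.shape[0], head.out_features)
    for _ in range(n_samples):
        w = head.weight + std * torch.randn_like(head.weight)
        probs += F.linear(h, w, head.bias).softmax(-1)
    return probs / n_samples


@torch.no_grad()
def dynamic_predict(model, x, weight_vars, entropy_threshold=0.5):
    """Stop at the first exit whose predictive entropy is below the threshold;
    otherwise fall through to the final exit."""
    heads = [model.exit1, model.exit2]
    for h, head, var in zip(model.features(x), heads, weight_vars):
        p = predictive_probs(head, h, var)
        entropy = -(p * p.clamp_min(1e-12).log()).sum(-1)
        if entropy.mean() < entropy_threshold:  # confident enough: exit early
            break
    return p


model = EarlyExitNet()
# Placeholder posterior variances; a real method would fit these post hoc,
# e.g. from the Hessian of the loss with respect to the last-layer weights.
weight_vars = [torch.full_like(model.exit1.weight, 1e-2),
               torch.full_like(model.exit2.weight, 1e-2)]
x = torch.randn(1, 32)
p = dynamic_predict(model, x, weight_vars)
print(p.argmax(-1).item(), p.max().item())

In a full treatment, the posterior variances would be obtained by fitting the probabilistic last layer on the already-trained network rather than using fixed placeholders, and the exit rule would typically act per sample rather than per batch so that easy inputs leave early while hard ones continue.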

Related Material


[pdf] [supp] [arXiv]
@InProceedings{Meronen_2024_WACV,
    author    = {Meronen, Lassi and Trapp, Martin and Pilzer, Andrea and Yang, Le and Solin, Arno},
    title     = {Fixing Overconfidence in Dynamic Neural Networks},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2024},
    pages     = {2680-2690}
}