Calibration of Continual Learning Models

Lanpei Li, Elia Piccoli, Andrea Cossu, Davide Bacciu, Vincenzo Lomonaco; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 4160-4169

Abstract


Continual Learning (CL) focuses on maximizing the predictive performance of a model across a non-stationary stream of data. Unfortunately, CL models tend to forget previous knowledge, and thus often underperform when compared with an offline model trained jointly on the entire data stream. Given that any CL model will eventually make mistakes, it is of crucial importance to build calibrated CL models: models that can reliably tell their confidence when making a prediction. Model calibration is an active research topic in machine learning, yet it has not been properly investigated in CL. We provide the first empirical study of the behavior of calibration approaches in CL, showing that CL strategies do not inherently learn calibrated models. To mitigate this issue, we design a continual calibration approach that improves the performance of post-processing calibration methods over a wide range of different benchmarks and CL strategies. CL does not necessarily need perfect predictive models, but rather it can benefit from reliable predictive models. We believe our study on continual calibration represents a first step towards this direction.
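
As background for the post-processing calibration methods the abstract refers to, the sketch below illustrates temperature scaling, a standard post-hoc calibration technique that rescales held-out logits with a single learned scalar. It is a minimal, generic example, not the continual calibration approach proposed in the paper; the function name `fit_temperature` and the synthetic validation data are assumptions made for illustration.

import torch
import torch.nn.functional as F


def fit_temperature(logits: torch.Tensor, labels: torch.Tensor,
                    lr: float = 0.01, steps: int = 200) -> torch.Tensor:
    """Learn a scalar temperature T so that softmax(logits / T) on a
    held-out set is better calibrated (lower NLL). Illustrative only."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log(T) so T stays positive
    optimizer = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        optimizer.step()
    return log_t.exp().detach()


if __name__ == "__main__":
    # Synthetic, overconfident held-out logits (assumed data, for illustration).
    torch.manual_seed(0)
    labels = torch.randint(0, 10, (512,))
    logits = 5.0 * torch.randn(512, 10)
    logits[torch.arange(512), labels] += 2.0  # loosely correlate logits with labels

    T = fit_temperature(logits, labels)
    calibrated_probs = F.softmax(logits / T, dim=1)
    print(f"Fitted temperature: {T.item():.2f}")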

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Li_2024_CVPR,
    author    = {Li, Lanpei and Piccoli, Elia and Cossu, Andrea and Bacciu, Davide and Lomonaco, Vincenzo},
    title     = {Calibration of Continual Learning Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {4160-4169}
}