[bibtex]
@InProceedings{Cai_2024_CVPR,
    author    = {Cai, Yusong and Ling, Shimou and Zhang, Liang and Pan, Lili and Li, Hongliang},
    title     = {Is Our Continual Learner Reliable? Investigating Its Decision Attribution Stability through SHAP Value Consistency},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {5568-5575}
}
Is Our Continual Learner Reliable? Investigating Its Decision Attribution Stability through SHAP Value Consistency
Abstract
In this work, we identify continual learning (CL) methods' inherent differences in sequential decision attribution. In the sequential learning process, inconsistent decision attribution may undermine the interpretability of a continual learner. However, existing CL evaluation metrics, as well as current interpretability methods, cannot measure the decision attribution stability of a continual learner. To bridge this gap, we introduce the Shapley value, a well-known decision attribution theory, and define SHAP value consistency (SHAPC) to measure the consistency of a continual learner's decision attribution. Furthermore, we define the mean and the variance of SHAPC values, namely SHAPC-Mean and SHAPC-Var, to jointly evaluate the decision attribution stability of continual learners over sequential tasks. On Split CIFAR-10, Split CIFAR-100, and Split TinyImageNet, we compare the decision attribution stability of different CL methods using the proposed metrics, providing a new perspective for evaluating their reliability.
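To make the proposed metrics concrete, below is a minimal sketch of how per-sample SHAPC scores could be aggregated into SHAPC-Mean and SHAPC-Var. The abstract does not give the exact SHAPC formula, so the cosine-similarity consistency measure, the function names shapc and shapc_mean_var, and the array shapes are illustrative assumptions rather than the paper's definition.

import numpy as np

def shapc(phi_prev, phi_curr):
    # Hypothetical per-sample SHAPC score: cosine similarity between the
    # SHAP attribution vectors of the same input under two checkpoints of
    # the continual learner (e.g., after task t-1 and after task t).
    # The paper's exact consistency measure may differ; this is an assumption.
    a = np.asarray(phi_prev).ravel()
    b = np.asarray(phi_curr).ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def shapc_mean_var(shap_prev, shap_curr):
    # Aggregate per-sample SHAPC scores into SHAPC-Mean and SHAPC-Var.
    # shap_prev, shap_curr: sequences of SHAP attribution arrays for the
    # same evaluation samples before and after learning a new task.
    scores = np.array([shapc(p, c) for p, c in zip(shap_prev, shap_curr)])
    return float(scores.mean()), float(scores.var())

# Sketch of usage: compute SHAP values for held-out samples from an earlier
# task with an explainer (e.g., shap.DeepExplainer from the shap library)
# applied to the model checkpoints after consecutive tasks, then aggregate:
#   mean_t, var_t = shapc_mean_var(phi_after_prev_task, phi_after_curr_task)
# A high SHAPC-Mean with low SHAPC-Var would indicate stable decision
# attribution across the task sequence, in the spirit of the abstract.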