SAFARI: Versatile and Efficient Evaluations for Robustness of Interpretability

Wei Huang, Xingyu Zhao, Gaojie Jin, Xiaowei Huang; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 1988-1998


The lack of interpretability in Deep Learning (DL) is a barrier to trustworthy AI. Despite great efforts made by the Explainable AI (XAI) community, explanations lack robustness--indistinguishable input perturbations may lead to different XAI results. Thus, it is vital to assess how robust DL interpretability is, given an XAI method. In this paper, we identify several challenges that the state of the art is unable to cope with collectively: i) existing metrics are not comprehensive; ii) XAI techniques are highly heterogeneous; iii) misinterpretations are normally rare events. To tackle these challenges, we introduce two black-box evaluation methods, concerning the worst-case interpretation discrepancy and a probabilistic notion of overall robustness, respectively. A Genetic Algorithm (GA) with a bespoke fitness function is used to solve the constrained optimisation for efficient worst-case evaluation. Subset Simulation (SS), a technique dedicated to estimating rare-event probabilities, is used to evaluate overall robustness. Experiments show that our methods outperform the state of the art in accuracy, sensitivity, and efficiency. Finally, we demonstrate two applications of our methods: ranking robust XAI methods and selecting training schemes that improve both classification and interpretation robustness.
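The Subset Simulation idea underlying the second evaluation method can be sketched on a toy rare-event problem: a rare probability P(g(X) >= b) is factored into a product of larger conditional probabilities over nested intermediate levels, each estimated with ordinary Monte Carlo plus MCMC resampling. The sketch below is illustrative only, not the paper's implementation; the function name, parameters, and the standard-Gaussian toy input model are assumptions.

```python
import math
import random

def subset_simulation(g, dim, threshold, n=1000, p0=0.1, step=1.0, seed=0):
    """Estimate P(g(X) >= threshold) for X ~ N(0, I_dim) via Subset Simulation.

    The rare event is split into nested levels, each holding roughly a
    fraction p0 of the current population, so its probability is the
    product of the per-level conditional probabilities.
    """
    rng = random.Random(seed)
    nc = max(1, int(round(p0 * n)))               # number of seed chains per level
    # Level 0: plain Monte Carlo from the standard normal input model.
    samples = [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(n)]
    prob = 1.0
    for _ in range(50):                           # safety cap on number of levels
        scored = sorted(samples, key=g, reverse=True)
        b = g(scored[nc - 1])                     # intermediate threshold (p0-quantile)
        if b >= threshold:                        # final level reached
            hits = sum(1 for x in samples if g(x) >= threshold)
            return prob * hits / n
        prob *= p0
        # Repopulate with modified-Metropolis chains started from the top seeds,
        # conditioned on staying in the current intermediate event {g >= b}.
        samples = []
        for x in scored[:nc]:
            x = list(x)
            for _ in range(n // nc):
                cand = list(x)
                for j in range(dim):
                    prop = cand[j] + rng.uniform(-step, step)
                    # Component-wise accept w.r.t. the standard normal density.
                    if math.log(rng.random() + 1e-300) < (cand[j] ** 2 - prop ** 2) / 2:
                        cand[j] = prop
                if g(cand) >= b:                  # reject moves leaving the event
                    x = cand
                samples.append(list(x))
    return prob
```

As a sanity check, `subset_simulation(lambda x: x[0], dim=1, threshold=3.0)` should land near the true tail probability P(X >= 3) of about 1.35e-3, which plain Monte Carlo with the same budget would estimate far more noisily.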

Related Material

@InProceedings{Huang_2023_ICCV,
    author    = {Huang, Wei and Zhao, Xingyu and Jin, Gaojie and Huang, Xiaowei},
    title     = {SAFARI: Versatile and Efficient Evaluations for Robustness of Interpretability},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {1988-1998}
}