On the Faithfulness of Vision Transformer Explanations

Junyi Wu, Weitai Kang, Hao Tang, Yuan Hong, Yan Yan; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 10936-10945

Abstract


To interpret Vision Transformers, post-hoc explanations assign salience scores to input pixels, providing human-understandable heatmaps. However, whether these interpretations reflect true rationales behind the model's output is still underexplored. To address this gap, we study the faithfulness criterion of explanations: the assigned salience scores should represent the influence of the corresponding input pixels on the model's predictions. To evaluate faithfulness, we introduce the Salience-guided Faithfulness Coefficient (SaCo), a novel evaluation metric leveraging essential information of salience distribution. Specifically, we conduct pair-wise comparisons among distinct pixel groups and then aggregate the differences in their salience scores, resulting in a coefficient that indicates the explanation's degree of faithfulness. Our explorations reveal that current metrics struggle to differentiate between advanced explanation methods and Random Attribution, thereby failing to capture the faithfulness property. In contrast, our proposed SaCo offers a reliable faithfulness measurement, establishing a robust metric for interpretations. Furthermore, our SaCo demonstrates that the use of gradient and multi-layer aggregation can markedly enhance the faithfulness of attention-based explanations, shedding light on potential paths for advancing Vision Transformer explainability.
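The pairwise-comparison idea from the abstract can be sketched as follows. This is a hypothetical illustration based only on the description above, not the authors' released implementation: it assumes pixel groups have already been formed, each with an aggregate salience score and a measured impact on the prediction (e.g., the confidence drop when that group is perturbed), and it rewards pairs where higher salience coincides with higher impact.

```python
def saco_coefficient(salience, impact):
    """Toy SaCo-style faithfulness coefficient in [-1, 1] (illustrative only).

    `salience`: per-group salience scores, sorted in descending order.
    `impact`: measured effect of perturbing each group on the prediction.
    For each pair (i, j) with salience[i] >= salience[j], a faithful
    explanation should yield impact[i] >= impact[j]; consistent pairs
    add their salience gap, violating pairs subtract it.
    """
    total = 0.0
    weight = 0.0
    k = len(salience)
    for i in range(k):
        for j in range(i + 1, k):
            gap = abs(salience[i] - salience[j])
            sign = 1.0 if impact[i] >= impact[j] else -1.0
            total += sign * gap
            weight += gap
    # Normalize so a fully consistent ordering gives 1.0,
    # a fully reversed ordering gives -1.0.
    return total / weight if weight > 0 else 0.0
```

Under this sketch, an explanation whose salience ranking matches the measured impact ranking scores 1.0, while a fully inverted ranking scores -1.0, which matches the intuition that Random Attribution should land near zero.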

Related Material


@InProceedings{Wu_2024_CVPR,
  author    = {Wu, Junyi and Kang, Weitai and Tang, Hao and Hong, Yuan and Yan, Yan},
  title     = {On the Faithfulness of Vision Transformer Explanations},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {10936-10945}
}