@InProceedings{Yang_2023_CVPR,
  author    = {Yang, Ruo and Wang, Binghui and Bilgic, Mustafa},
  title     = {IDGI: A Framework To Eliminate Explanation Noise From Integrated Gradients},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2023},
  pages     = {23725-23734}
}
IDGI: A Framework To Eliminate Explanation Noise From Integrated Gradients
Abstract
Integrated Gradients (IG) and its variants are well-known techniques for interpreting the decisions of deep neural networks. While IG-based approaches attain state-of-the-art performance, they often integrate noise into their explanation saliency maps, which reduces their interpretability. To minimize this noise, we analytically examine its source and propose a new approach to reduce the explanation noise based on our findings. We propose the Important Direction Gradient Integration (IDGI) framework, which can be easily incorporated into any IG-based method that uses Riemann integration for the integrated-gradient computation. Extensive experiments with three IG-based methods show that IDGI improves them drastically on numerous interpretability metrics.
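As background for the Riemann-integration computation the abstract refers to, the following is a minimal sketch of standard Integrated Gradients approximated by a left Riemann sum. It is not the paper's IDGI code; the function `f` and its gradient are toy stand-ins for a network's class score, chosen so the example runs without a deep-learning framework.

```python
# Background sketch: Integrated Gradients via a left Riemann sum, the
# computation that IDGI-compatible methods build on. `f` is a toy
# differentiable scalar function standing in for a model's output.
import numpy as np

def f(x):
    # Toy "model output": smooth scalar function of the input vector.
    return np.sum(np.tanh(x) ** 2)

def grad_f(x):
    # Analytic gradient: d/dx tanh(x)^2 = 2 tanh(x) (1 - tanh(x)^2).
    t = np.tanh(x)
    return 2.0 * t * (1.0 - t ** 2)

def integrated_gradients(x, baseline, steps=100):
    # Left Riemann approximation of
    #   IG_i(x) = (x_i - x'_i) * ∫_0^1 ∂f/∂x_i(x' + a (x - x')) da
    alphas = np.linspace(0.0, 1.0, steps, endpoint=False)
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

x = np.array([0.5, -1.0, 2.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(x, baseline)
# Completeness check: attributions should sum (approximately) to
# f(x) - f(baseline); the residual is the Riemann discretization error.
print(attr, attr.sum(), f(x) - f(baseline))
```

Each Riemann step contributes a gradient term along the straight path from the baseline to the input; the discretization error of this sum is one place where explanation noise can enter, which is the setting IDGI targets.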