L1-Norm Gradient Penalty for Noise Reduction of Attribution Maps

Keisuke Kiritoshi, Ryosuke Tanno, Tomonori Izumitani; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019, pp. 118-121

Abstract


Determining the attribution of input elements to output values is important for interpretability when deep neural network (DNN) models are used in real-world tasks. Gradient-based methods are widely used because they represent the relationship between each input and output pair as a partial derivative. However, attribution values computed from DNN models that use batch normalization contain high levels of noise, which significantly reduces the interpretability of the model. To obtain sparse and interpretable attribution maps, we developed a new regularization method that adds a penalty term to the loss function based on the L1-norm of gradient values calculated through back-propagation. We evaluated the effectiveness of the method on the CIFAR-10 image dataset.
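The abstract's regularization can be illustrated with a minimal sketch (PyTorch, not the authors' code). It adds an L1-norm penalty on the gradient of the input to the usual cross-entropy loss; the penalty weight lam, the use of the predicted-class score as the backpropagated quantity, and the function name are illustrative assumptions rather than details taken from the paper.

import torch
import torch.nn.functional as F

def loss_with_l1_gradient_penalty(model, x, y, lam=1e-3):
    # Track gradients with respect to the input pixels.
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)

    # Gradient of the target-class score w.r.t. the input, computed with
    # create_graph=True so the penalty term is itself differentiable.
    score = logits.gather(1, y.unsqueeze(1)).sum()
    grads = torch.autograd.grad(score, x, create_graph=True)[0]

    # L1-norm of the input-gradient map, encouraging sparse attribution maps.
    penalty = grads.abs().mean()
    return ce + lam * penalty

During training, a loss of this form would replace the plain cross-entropy objective; at evaluation time the same input gradient can be visualized directly as an attribution (saliency) map.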

Related Material


[bibtex]
@InProceedings{Kiritoshi_2019_CVPR_Workshops,
author = {Kiritoshi, Keisuke and Tanno, Ryosuke and Izumitani, Tomonori},
title = {L1-Norm Gradient Penalty for Noise Reduction of Attribution Maps},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019}
}