Interpretable and Fine-Grained Visual Explanations for Convolutional Neural Networks

Jörg Wagner, Jan Mathias Köhler, Tobias Gindele, Leon Hetzel, Jakob Thaddäus Wiedemer, Sven Behnke; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 9097-9107

Abstract


To verify and validate networks, it is essential to gain insight into their decisions, their limitations, and possible shortcomings of the training data. In this work, we propose a post-hoc, optimization-based visual explanation method, which highlights the evidence in the input image for a specific prediction. Our approach is based on a novel technique to defend against adversarial evidence (i.e. faulty evidence due to artefacts) by filtering gradients during optimization. The defense does not depend on human-tuned parameters. It enables explanations which are both fine-grained and preserve the characteristics of images, such as edges and colors. The explanations are interpretable, suited for visualizing detailed evidence, and can be tested as they are valid model inputs. We qualitatively and quantitatively evaluate our approach on a multitude of models and datasets.
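To illustrate the general idea of an optimization-based explanation, the sketch below optimizes a per-feature mask so that the masked input preserves the score of a toy linear model while the mask stays sparse. This is a minimal illustration only: the linear model, the loss weights, and the gradient-filtering rule (zeroing mask gradients on features whose contribution is negative) are all hypothetical stand-ins, not the paper's actual network, objective, or adversarial-defense filter.

```python
import numpy as np

def explain(x, w, steps=200, lr=0.1, lam=0.05):
    """Optimize a mask m in [0, 1]^d so the masked input m * x preserves
    the toy model score w @ x while an L1 term keeps m sparse.

    The gradient filter below (zeroing updates on negative-contribution
    features) is a simplified, hypothetical stand-in for the paper's
    adversarial-gradient filtering, shown only to convey the mechanism.
    """
    m = np.ones_like(x)          # start from the identity mask
    target = w @ x               # score to be preserved
    for _ in range(steps):
        score = w @ (m * x)
        # gradient of (score - target)^2 + lam * ||m||_1 w.r.t. m
        g = 2.0 * (score - target) * (w * x) + lam * np.sign(m)
        # filtering step: keep only gradient components on features that
        # actually contribute positive evidence (hypothetical rule)
        g[w * x < 0] = 0.0
        m = np.clip(m - lr * g, 0.0, 1.0)
    return m
```

Because gradients on filtered features are zeroed, their mask entries never move from their initial value; the remaining entries settle at a trade-off between score fidelity and sparsity, mimicking how a filtered optimization avoids writing spurious evidence into the explanation.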

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Wagner_2019_CVPR,
author = {Wagner, J{\"o}rg and K{\"o}hler, Jan Mathias and Gindele, Tobias and Hetzel, Leon and Wiedemer, Jakob Thadd{\"a}us and Behnke, Sven},
title = {Interpretable and Fine-Grained Visual Explanations for Convolutional Neural Networks},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}