Interpreting Interpretations: Organizing Attribution Methods by Criteria

Zifan Wang, Piotr Mardziel, Anupam Datta, Matt Fredrikson; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, pp. 10-11

Abstract

Motivated by distinct, though related, criteria, a growing number of attribution methods have been developed to interpret deep learning models. While each relies on the interpretability of the concept of "importance" and on our ability to visualize patterns, the explanations produced by different methods often disagree. In this work we expand the foundations of human-understandable concepts with which attributions can be interpreted beyond "importance" and its visualization; we incorporate the logical concepts of necessity and sufficiency, and the concept of proportionality. We define metrics that quantify these concepts as aspects of an attribution. We evaluate our measures on a collection of methods explaining convolutional neural networks (CNNs) for image classification. We conclude that some attribution methods are more appropriately interpreted in terms of necessity, others in terms of sufficiency, and that no single method is always the most appropriate in terms of both.
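The precise definitions of the necessity and sufficiency metrics are given in the full paper; as a rough illustration of the general idea, the occlusion-style sketch below scores an attribution by masking pixels and observing the change in class confidence. It is a minimal sketch under stated assumptions, not the authors' definitions: the callable model, the fraction of pixels masked, and the baseline fill value are names introduced here for illustration only.

import numpy as np

def top_indices(attribution, fraction):
    """Indices of the highest-attributed pixels (flattened)."""
    flat = attribution.ravel()
    k = max(1, int(fraction * flat.size))
    return np.argsort(flat)[-k:]

def necessity_score(model, image, attribution, fraction=0.1, baseline=0.0):
    """Drop in class confidence when the top-attributed pixels are removed.
    A large drop suggests those pixels were necessary for the prediction."""
    masked = image.ravel().copy()
    masked[top_indices(attribution, fraction)] = baseline
    return model(image) - model(masked.reshape(image.shape))

def sufficiency_score(model, image, attribution, fraction=0.1, baseline=0.0):
    """Confidence retained when only the top-attributed pixels are kept.
    High retained confidence suggests those pixels were sufficient."""
    kept = np.full(image.size, baseline, dtype=image.dtype)
    idx = top_indices(attribution, fraction)
    kept[idx] = image.ravel()[idx]
    return model(kept.reshape(image.shape))

Here model is assumed to map an image array to the scalar confidence of the class being explained, and attribution is assumed to have the same shape as image. Sweeping fraction and comparing the resulting curves across attribution methods is one common way such occlusion-style measures are aggregated.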

Related Material

@InProceedings{Wang_2020_CVPR_Workshops,
author = {Wang, Zifan and Mardziel, Piotr and Datta, Anupam and Fredrikson, Matt},
title = {Interpreting Interpretations: Organizing Attribution Methods by Criteria},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2020},
pages = {10-11}
}