Explaining Neural Networks Semantically and Quantitatively

Runjin Chen, Hao Chen, Jie Ren, Ge Huang, Quanshi Zhang; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 9187-9196

Abstract


This paper presents a method to produce a semantic and quantitative explanation of the knowledge encoded in a convolutional neural network (CNN). Estimating the specific rationale behind each prediction made by the CNN is a key issue in understanding neural networks, and it is of significant value in real applications. In this study, we propose to distill knowledge from the CNN into an explainable additive model, which explains the CNN's predictions quantitatively. We discuss the problem of biased interpretation of CNN predictions, and to overcome this bias, we develop prior losses to guide the learning of the explainable additive model. Experimental results demonstrate the effectiveness of our method.
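The core idea of an explainable additive surrogate can be illustrated with a minimal sketch: approximate a black-box prediction score as a weighted sum of per-component semantic scores, so the learned weights quantify each component's contribution. This is an assumption-laden toy illustration, not the paper's actual distillation procedure; all names (`phi`, `cnn_scores`, `true_w`) and the synthetic data are hypothetical.

```python
import numpy as np

# Hypothetical setup: phi holds per-sample scores of semantic components
# (e.g. object parts), and cnn_scores stands in for the CNN's output logit.
rng = np.random.default_rng(0)
n_samples, n_components = 200, 4
phi = rng.normal(size=(n_samples, n_components))

# Pretend the CNN score is an (unknown) weighted combination plus noise.
true_w = np.array([0.8, -0.5, 0.3, 0.1])
cnn_scores = phi @ true_w + 0.01 * rng.normal(size=n_samples)

# Distill into an additive model by least squares: each learned weight w_i
# quantitatively attributes part of the prediction to component i.
w, *_ = np.linalg.lstsq(phi, cnn_scores, rcond=None)
print(np.round(w, 2))
```

In the paper's setting the additive model is learned with extra prior losses to avoid biased attributions; the plain least-squares fit above omits that and only conveys the additive-decomposition idea.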

Related Material


@InProceedings{Chen_2019_ICCV,
author = {Chen, Runjin and Chen, Hao and Ren, Jie and Huang, Ge and Zhang, Quanshi},
title = {Explaining Neural Networks Semantically and Quantitatively},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}