Interpretable Convolutional Neural Networks

Quanshi Zhang, Ying Nian Wu, Song-Chun Zhu; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 8827-8836


This paper proposes a method to modify a traditional convolutional neural network (CNN) into an interpretable CNN, in order to clarify knowledge representations in the high conv-layers of the CNN. In an interpretable CNN, each filter in a high conv-layer represents a specific object part. Our interpretable CNNs use the same training data as ordinary CNNs, without any annotations of object parts or textures for supervision. The interpretable CNN automatically assigns an object part to each filter in a high conv-layer during the learning process. Our method can be applied to different types of CNNs with various structures. The explicit knowledge representation in an interpretable CNN helps people understand the logic inside a CNN, i.e., which patterns the CNN memorizes for prediction. Experiments have shown that filters in an interpretable CNN are more semantically meaningful than those in a traditional CNN. The code is available at
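The core mechanism behind the abstract's "automatically assigns an object part to each filter" is a set of part templates: during the forward pass, a filter's feature map is masked by the template whose peak aligns with the filter's strongest activation, which forces the filter to fire at a single consistent part location. Below is a minimal NumPy sketch of that masking step; the L1-distance template shape and the constants `tau` and `beta` are illustrative assumptions, not the paper's exact training code, and the full method additionally trains with a mutual-information-based filter loss that is omitted here.

```python
import numpy as np

def part_template(n, mu, tau=0.5, beta=4.0):
    """Positive part template on an n x n grid, peaking at location mu.

    Illustrative form: tau * max(1 - beta * ||p - mu||_1 / n, -1),
    i.e., positive near mu and negative far from it. tau/beta are
    assumed hyperparameters for this sketch.
    """
    ys, xs = np.indices((n, n))
    dist = (np.abs(ys - mu[0]) + np.abs(xs - mu[1])) / n
    return tau * np.maximum(1.0 - beta * dist, -1.0)

def mask_feature_map(x, tau=0.5, beta=4.0):
    """Mask a square feature map x with the template centered at its
    strongest activation, then truncate negative responses to zero.

    This keeps activations near the inferred part location and
    suppresses activations elsewhere, encouraging the filter to
    represent a single object part.
    """
    mu = np.unravel_index(np.argmax(x), x.shape)
    template = part_template(x.shape[0], mu, tau, beta)
    return np.maximum(x * template, 0.0)

# Example: a feature map with a strong peak and a spurious activation.
x = np.zeros((6, 6))
x[2, 3] = 1.0   # main part response
x[5, 0] = 0.8   # spurious response far from the part
masked = mask_feature_map(x)
```

After masking, the response at the inferred part location survives while the distant spurious activation is zeroed out, which is what makes the filter's activation map read as a single part.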

Related Material

@InProceedings{Zhang_2018_CVPR,
author = {Zhang, Quanshi and Nian Wu, Ying and Zhu, Song-Chun},
title = {Interpretable Convolutional Neural Networks},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}