Interpretable Image Recognition by Constructing Transparent Embedding Space

Jiaqi Wang, Huafeng Liu, Xinyue Wang, Liping Jing; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 895-904


Humans typically explain their reasoning (e.g., during classification) by dissecting an image into parts and pointing out how these parts provide evidence for the concepts in their minds. Inspired by this cognitive process, several part-level interpretable neural network architectures have been proposed to explain predictions. However, they struggle with complex data structure and conflate the effects of individual parts on the output category. In this work, an interpretable deep network for image recognition is designed by introducing a plug-in transparent embedding space (TesNet) that bridges high-level input patches (e.g., CNN feature maps) and the output categories. This plug-in embedding space is spanned by transparent basis concepts constructed on the Grassmann manifold. These basis concepts are enforced to be category-aware, and within-category concepts are orthogonal to each other, which ensures the embedding space is disentangled. Meanwhile, each basis concept can be traced back to particular image patches, so the concepts are transparent and easy to use when explaining the reasoning process. Compared with state-of-the-art interpretable methods, TesNet is much more beneficial to classification tasks, providing better interpretability for predictions while improving final accuracy.
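Two of the abstract's key ideas, keeping within-category basis concepts orthogonal and tracing each concept back to its most-activated image patch, can be sketched as follows. This is a minimal NumPy illustration, not the paper's exact formulation: the dimensions, function names, and the Frobenius-norm orthogonality penalty are illustrative assumptions.

```python
import numpy as np

def orthogonality_loss(B):
    """Penalize deviation of within-category basis concepts from
    orthonormality via ||B B^T - I||_F^2, a common surrogate for
    constraining the concept subspace to the Grassmann manifold."""
    G = B @ B.T
    return np.sum((G - np.eye(B.shape[0])) ** 2)

def concept_activations(patches, B):
    """Project patch features onto each basis concept; the max over
    patches links each concept to its most-activated image patch,
    which is what makes the concept traceable."""
    sims = patches @ B.T                 # (num_patches, num_concepts)
    return sims.max(axis=0), sims.argmax(axis=0)

# Toy example with hypothetical sizes: 5 concepts, 64-dim features,
# a 7x7 CNN feature map flattened into 49 patch vectors.
rng = np.random.default_rng(0)
B = rng.normal(size=(5, 64))
B /= np.linalg.norm(B, axis=1, keepdims=True)
patches = rng.normal(size=(49, 64))
scores, best_patch = concept_activations(patches, B)
```

In training, a penalty like `orthogonality_loss` would be added to the classification objective so the learned concepts disentangle, while `best_patch` indices support the patch-level explanations the abstract describes.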

Related Material

@InProceedings{Wang_2021_ICCV,
    author    = {Wang, Jiaqi and Liu, Huafeng and Wang, Xinyue and Jing, Liping},
    title     = {Interpretable Image Recognition by Constructing Transparent Embedding Space},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {895-904}
}