Interpretation of Feature Space using Multi-Channel Attentional Sub-Networks

Masanari Kimura, Masayuki Tanaka; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019, pp. 36-39

Abstract

Convolutional Neural Networks (CNNs) have achieved impressive results on a variety of tasks, but interpreting their internal mechanisms remains a challenging problem. To tackle this problem, we exploit a multi-channel attention mechanism in feature space. Our network architecture yields an attention mask for each feature, whereas existing CNN visualization methods provide only a single attention mask shared by all features. We apply the proposed multi-channel attention mechanism to a multi-attribute recognition task and obtain a different attention mask for each feature and for each attribute. These analyses give deeper insight into the feature space of CNNs. Experimental results on a benchmark dataset show that the proposed method offers high interpretability to humans while accurately capturing the attributes of the data.
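
The idea of per-feature, per-attribute attention masks can be sketched compactly. The following PyTorch-style code is only an illustration and is not the authors' exact architecture; the class names, the depthwise 1x1 convolution used to produce the masks, and the sigmoid gating are assumptions made for this example.

import torch
import torch.nn as nn

# Hypothetical sketch: one spatial attention mask per feature channel,
# instead of a single mask shared by all channels.
class MultiChannelAttention(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        # Depthwise 1x1 convolution: each channel produces its own mask logits.
        self.mask_conv = nn.Conv2d(in_channels, in_channels,
                                   kernel_size=1, groups=in_channels)

    def forward(self, features):
        # features: (batch, channels, height, width)
        masks = torch.sigmoid(self.mask_conv(features))  # one mask per channel
        return features * masks, masks                   # re-weighted features and masks

# Hypothetical sketch: one attention sub-network and classifier per attribute,
# so every attribute gets its own set of per-channel masks to inspect.
class MultiAttributeHead(nn.Module):
    def __init__(self, in_channels, num_attributes):
        super().__init__()
        self.attentions = nn.ModuleList(
            MultiChannelAttention(in_channels) for _ in range(num_attributes))
        self.classifiers = nn.ModuleList(
            nn.Linear(in_channels, 1) for _ in range(num_attributes))
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, features):
        logits, all_masks = [], []
        for attend, classify in zip(self.attentions, self.classifiers):
            attended, masks = attend(features)
            pooled = self.pool(attended).flatten(1)  # (batch, channels)
            logits.append(classify(pooled))
            all_masks.append(masks)                  # keep masks for visualization
        return torch.cat(logits, dim=1), all_masks

In this sketch, feeding backbone feature maps into MultiAttributeHead returns per-attribute logits together with the per-feature masks, which could be upsampled and overlaid on the input image for inspection.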

Related Material

[bibtex]
@InProceedings{Kimura_2019_CVPR_Workshops,
  author    = {Kimura, Masanari and Tanaka, Masayuki},
  title     = {Interpretation of Feature Space using Multi-Channel Attentional Sub-Networks},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2019},
  pages     = {36-39}
}