Explaining Failure: Investigation of Surprise and Expectation in CNNs

Thomas Hartley, Kirill Sidorov, Christopher Willis, David Marshall; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, pp. 12-13

Abstract


As Convolutional Neural Networks (CNNs) have expanded into everyday use, more rigorous methods of explaining their inner workings are required. Current popular techniques, such as saliency maps, show how a network interprets an input image at a simple level by scoring pixels according to their importance. In this paper, we introduce the concepts of surprise and expectation as a means of exploring and visualising how a network learns to model the training data through an understanding of filter activations. We show that this is a powerful technique for understanding how the network reacts to an unseen image compared to the training data. We also show that the insights provided by our technique allow us to "fix" misclassifications. Our technique can be used with nearly all types of CNNs. We evaluate our method both qualitatively and quantitatively using ImageNet.
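The abstract does not specify how surprise is computed, but one plausible reading is a statistical comparison of a filter's activation on an unseen image against that filter's activation distribution over the training data. The sketch below illustrates this idea with synthetic activation statistics; the z-score formulation, the `surprise` helper, and all data are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-filter mean activations for 1000 training images, 8 filters
# (stand-ins for activations gathered from a trained CNN over the training set).
train_acts = rng.normal(loc=1.0, scale=0.2, size=(1000, 8))

mu = train_acts.mean(axis=0)     # "expectation": mean activation per filter
sigma = train_acts.std(axis=0)   # spread of each filter's activations

def surprise(acts, mu, sigma, eps=1e-8):
    """Illustrative surprise score: z-score of an image's filter
    activations against the training-set activation statistics."""
    return np.abs(acts - mu) / (sigma + eps)

# An unseen image whose filter 3 fires far more strongly than expected.
new_acts = mu.copy()
new_acts[3] += 5 * sigma[3]

scores = surprise(new_acts, mu, sigma)
most_surprising = int(np.argmax(scores))  # filter 3 stands out
```

Under this reading, a highly "surprising" filter flags where the network's learned model of the training data diverges from the unseen input, which is the kind of signal the paper uses to explain and correct misclassifications.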

Related Material


[pdf]
[bibtex]
@InProceedings{Hartley_2020_CVPR_Workshops,
author = {Hartley, Thomas and Sidorov, Kirill and Willis, Christopher and Marshall, David},
title = {Explaining Failure: Investigation of Surprise and Expectation in CNNs},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2020}
}