Fooling Network Interpretation in Image Classification

Akshayvarun Subramanya, Vipin Pillai, Hamed Pirsiavash; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 2020-2029

Abstract


Deep neural networks have been shown to be fooled rather easily by adversarial attack algorithms. Practical methods such as adversarial patches are extremely effective at causing misclassification. However, such patches are highlighted by standard network interpretation algorithms, revealing the identity of the adversary. We show that it is possible to create adversarial patches that not only fool the prediction but also change what the interpretation reveals about the cause of that prediction. Moreover, we introduce our attack as a controlled setting for measuring the accuracy of interpretation algorithms. We demonstrate this with extensive experiments on Grad-CAM interpretation and show that the attack transfers to occluding-patch interpretation as well. We believe our algorithms can facilitate the development of more robust network interpretation tools that truly explain the network's underlying decision-making process.
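
The abstract describes a joint optimization: the patch must both drive the classifier to a target class and keep the Grad-CAM heatmap away from the patch region. The sketch below illustrates one way to set this up in PyTorch; it is not the authors' released code, and the choice of ResNet-18, layer4 as the Grad-CAM layer, the patch location, the target class, and the unit loss weight are all illustrative assumptions.

import torch
import torch.nn.functional as F
import torchvision.models as models

# Pretrained classifier; Grad-CAM is computed from the last conv block.
model = models.resnet18(weights="IMAGENET1K_V1").eval()
feats = {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(maps=o))

image = torch.rand(1, 3, 224, 224)               # placeholder input image
mask = torch.zeros(1, 1, 224, 224)               # patch occupies a 50x50 region
mask[..., 10:60, 10:60] = 1.0
patch = torch.rand(1, 3, 224, 224, requires_grad=True)
target = torch.tensor([859])                     # attacker-chosen target class
opt = torch.optim.Adam([patch], lr=0.01)

for _ in range(200):
    opt.zero_grad()
    patched = image * (1 - mask) + patch.clamp(0, 1) * mask
    logits = model(patched)

    # Grad-CAM for the target class. create_graph=True makes the heatmap
    # itself differentiable w.r.t. the patch pixels (a second-order term).
    score = logits[0, target.item()]
    grads = torch.autograd.grad(score, feats["maps"], create_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * feats["maps"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:],
                        mode="bilinear", align_corners=False)

    # Term 1: misclassify as the target class. Term 2: suppress the fraction
    # of Grad-CAM energy falling on the patch, hiding it from interpretation.
    loss = F.cross_entropy(logits, target) \
         + (cam * mask).sum() / (cam.sum() + 1e-8)
    loss.backward()
    opt.step()

In the paper the two terms are balanced by a weighting hyperparameter; the fixed weight of 1 above is a placeholder.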

Related Material


BibTeX
@InProceedings{Subramanya_2019_ICCV,
author = {Subramanya, Akshayvarun and Pillai, Vipin and Pirsiavash, Hamed},
title = {Fooling Network Interpretation in Image Classification},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}