[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Wang_2021_ICCV,
  author    = {Wang, Xin and Lin, Shuyun and Zhang, Hao and Zhu, Yufei and Zhang, Quanshi},
  title     = {Interpreting Attributions and Interactions of Adversarial Attacks},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  pages     = {1095-1104}
}
Interpreting Attributions and Interactions of Adversarial Attacks
Abstract
This paper aims to explain adversarial attacks in terms of how adversarial perturbations contribute to the attacking task. We estimate attributions of different image regions to the decrease of the attacking cost based on the Shapley value. We define and quantify interactions among adversarial perturbation pixels, and decompose the entire perturbation map into relatively independent perturbation components. The decomposition of the perturbation map shows that adversarially-trained DNNs have more perturbation components in the foreground than normally-trained DNNs. Moreover, compared to the normally-trained DNN, the adversarially-trained DNN has more components which mainly decrease the score of the true category. These analyses provide new insights into the understanding of adversarial attacks.
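The Shapley-value attribution the abstract describes can be sketched with standard Monte-Carlo permutation sampling, which averages each region's marginal contribution over random orderings. Everything below (the region labels, the toy cost function `toy_cost_drop`, and the function name `shapley_attributions`) is an illustrative assumption, not the paper's actual implementation:

```python
import random

def shapley_attributions(regions, cost_drop, num_samples=200, seed=0):
    """Monte-Carlo Shapley estimate: for each region, average its marginal
    contribution to cost_drop over randomly sampled insertion orders."""
    rng = random.Random(seed)
    phi = {r: 0.0 for r in regions}
    for _ in range(num_samples):
        order = regions[:]
        rng.shuffle(order)
        included = set()
        prev = cost_drop(included)
        for r in order:
            included.add(r)
            cur = cost_drop(included)
            phi[r] += cur - prev  # marginal contribution of region r
            prev = cur
    return {r: v / num_samples for r, v in phi.items()}

# Toy "decrease of the attacking cost" for a set of perturbed regions,
# purely illustrative: region "a" alone contributes 1.0, "b" alone 0.5,
# and the pair has a positive interaction of +0.25.
def toy_cost_drop(S):
    v = 0.0
    if "a" in S:
        v += 1.0
    if "b" in S:
        v += 0.5
    if {"a", "b"} <= S:
        v += 0.25
    return v

attr = shapley_attributions(["a", "b"], toy_cost_drop)
```

By the efficiency property of the Shapley value, the attributions sum exactly to the total cost drop of the full perturbation (here 1.75), and the interaction term is split evenly between the two regions.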