Polishing Decision-Based Adversarial Noise With a Customized Sampling

Yucheng Shi, Yahong Han, Qi Tian; The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 1030-1038

Abstract


As an effective black-box adversarial attack, decision-based methods polish adversarial noise by querying the target model. Among them, boundary attack is widely applied due to its powerful noise compression capability, especially when combined with transfer-based methods. Boundary attack splits the noise compression into several independent sampling processes, repeating each query with a constant sampling setting. In this paper, we demonstrate the advantage of using the current noise and historical queries to customize the variance and mean of sampling in boundary attack to polish adversarial noise. We further reveal the relationship between the initial noise and the compressed noise in boundary attack. We propose the Customized Adversarial Boundary (CAB) attack, which uses the current noise to model the sensitivity of each pixel and polishes the adversarial noise of each image with a customized sampling setting. On the one hand, CAB uses the current noise as a prior belief to customize the multivariate normal distribution. On the other hand, CAB keeps new samplings away from historical failed queries to avoid repeating similar mistakes. Experimental results on several image classification datasets demonstrate the validity of our method.
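The two ideas in the abstract — per-pixel variance derived from the current noise, and rejection of candidates near historical failed queries — can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's actual algorithm: the function name `customized_sample`, the step size, and the rejection radius are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def customized_sample(x, x_adv, failed, step=0.1, min_dist=0.05):
    """One customized sampling step in the spirit of CAB (illustrative sketch).

    x      : original image, flattened to a 1-D array in [0, 1]
    x_adv  : current adversarial example (same shape as x)
    failed : list of previously rejected candidate arrays
    """
    noise = x_adv - x
    # Customize the variance: pixels carrying larger current noise are
    # treated as more sensitive and receive a larger sampling std.
    sigma = step * np.abs(noise) + 1e-8
    # Customize the mean: bias samples toward the original image so that
    # accepted candidates compress the adversarial noise.
    mean = x_adv - step * noise
    for _ in range(100):
        cand = rng.normal(mean, sigma)
        # Keep new samplings away from historical failed queries.
        if all(np.linalg.norm(cand - f) > min_dist for f in failed):
            return np.clip(cand, 0.0, 1.0)
    return x_adv  # no acceptable candidate found; keep the current point
```

In a real decision-based attack, each candidate would be sent to the target model; candidates that lose the adversarial label would be appended to `failed`, and adversarial candidates closer to `x` would replace `x_adv`.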

Related Material


[pdf]
[bibtex]
@InProceedings{Shi_2020_CVPR,
author = {Shi, Yucheng and Han, Yahong and Tian, Qi},
title = {Polishing Decision-Based Adversarial Noise With a Customized Sampling},
booktitle = {The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}