How Many Dimensions Are Required To Find an Adversarial Example?

Charles Godfrey, Henry Kvinge, Elise Bishoff, Myles Mckay, Davis Brown, Tim Doster, Eleanor Byler; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023, pp. 2353-2360

Abstract


Past work exploring adversarial vulnerability has focused on situations where an adversary can perturb all dimensions of model input. In contrast, a range of recent works consider the case where an adversary can perturb either (i) a limited number of input parameters or (ii) a subset of modalities in a multimodal problem. In both of these cases, adversarial examples are effectively constrained to a subspace V in the ambient input space X. Motivated by this, in this work we investigate how adversarial vulnerability depends on dim(V). In particular, we show that the adversarial success of standard PGD attacks with Lp norm constraints behaves like a monotonically increasing function of epsilon * (dim(V)/dim(X))^(1/q), where epsilon is the perturbation budget and 1/p + 1/q = 1, provided p > 1 (the case p = 1 presents additional subtleties which we analyze in some detail). This functional form can be easily derived from a simple toy linear model, and as such our results lend further credence to arguments that adversarial examples are endemic to locally linear models on high dimensional spaces.
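The functional form above can be motivated by a toy linear model. The following is our own sketch of that style of argument, not a verbatim derivation from the paper, under the assumption that V is a coordinate subspace (the adversary perturbs only a subset S of input coordinates, with |S| = dim(V)). For a linear score f(x) = w^T x, Holder's inequality gives the attainable change in the score under a subspace-constrained Lp attack:

\[
  \max_{\operatorname{supp}(\delta)\subseteq S,\ \|\delta\|_p\le\epsilon} w^\top\delta
  \;=\; \epsilon\,\|w_S\|_q,
  \qquad \frac{1}{p}+\frac{1}{q}=1,
\]

where w_S denotes w restricted to the perturbed coordinates. For a generic weight vector with i.i.d. entries, ||w_S||_q grows like |S|^(1/q), so relative to a full-input attack the achievable score change scales like epsilon * (dim(V)/dim(X))^(1/q), matching the functional form quoted in the abstract.

The numerical sketch below is our own illustration, not the authors' code; the helper subspace_pgd_linear and all hyperparameters are ours. It runs PGD restricted to a random subspace V under an L2 budget (p = q = 2) against a random linear score and compares the achieved score change with the predicted epsilon * ||w||_2 * sqrt(dim(V)/dim(X)) trend.

import numpy as np

rng = np.random.default_rng(0)

def subspace_pgd_linear(w, basis, eps, steps=100, lr=0.1):
    # PGD maximizing the linear score w . delta over delta in span(basis)
    # with ||delta||_2 <= eps; `basis` has orthonormal columns spanning V.
    delta = np.zeros_like(w)
    for _ in range(steps):
        grad = w                           # gradient of w . delta w.r.t. delta
        grad_v = basis @ (basis.T @ grad)  # project the gradient onto V
        delta = delta + lr * grad_v
        norm = np.linalg.norm(delta)
        if norm > eps:                     # project back onto the L2 ball (stays in V)
            delta = delta * (eps / norm)
    return delta

dim_x, eps = 1000, 1.0
w = rng.standard_normal(dim_x)
for dim_v in (10, 100, 1000):
    # random dim_v-dimensional subspace of R^dim_x via a reduced QR factorization
    basis, _ = np.linalg.qr(rng.standard_normal((dim_x, dim_v)))
    delta = subspace_pgd_linear(w, basis, eps)
    achieved = w @ delta
    predicted = eps * np.linalg.norm(w) * np.sqrt(dim_v / dim_x)
    print(f"dim(V)={dim_v:5d}  achieved={achieved:.3f}  predicted trend={predicted:.3f}")

In this toy setting the achieved values track the sqrt(dim(V)/dim(X)) trend, which is the p = 2 instance of the general (dim(V)/dim(X))^(1/q) scaling.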

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Godfrey_2023_CVPR,
  author    = {Godfrey, Charles and Kvinge, Henry and Bishoff, Elise and Mckay, Myles and Brown, Davis and Doster, Tim and Byler, Eleanor},
  title     = {How Many Dimensions Are Required To Find an Adversarial Example?},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2023},
  pages     = {2353-2360}
}