Universal Spectral Adversarial Attacks for Deformable Shapes

Arianna Rampini, Franco Pestarini, Luca Cosmo, Simone Melzi, Emanuele Rodola; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 3216-3226

Abstract


Machine learning models are known to be vulnerable to adversarial attacks, namely perturbations of the data that lead to wrong predictions despite being imperceptible. However, the existence of "universal" attacks (i.e., unique perturbations that transfer across different data points) has only been demonstrated for images to date. Part of the reason lies in the lack of a common domain for geometric data, such as graphs, meshes, and point clouds, on which a universal perturbation can be defined. In this paper, we offer a change in perspective and demonstrate the existence of universal attacks for geometric data (shapes). We introduce a computational procedure that operates entirely in the spectral domain, where the attacks take the form of small perturbations to short eigenvalue sequences; the resulting geometry is then synthesized via shape-from-spectrum recovery. Our attacks are universal, in that they transfer across different shapes and different representations (meshes and point clouds), and they generalize to previously unseen data.
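To fix ideas, the sketch below illustrates the pipeline the abstract describes: extract a short Laplacian eigenvalue sequence from a shape and add a single shared perturbation r to it. This is not the authors' implementation: a plain graph Laplacian stands in for a mesh Laplacian, and the names short_spectrum, attacked_spectrum, and the parameters k and r are illustrative assumptions. The paper's shape-from-spectrum recovery step, which synthesizes geometry matching the perturbed spectrum, is only summarized in a comment.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def graph_laplacian(n_vertices, faces):
    # Unweighted graph Laplacian L = D - A from triangle connectivity.
    # (Assumption: a plain graph Laplacian is used here only to keep the
    # sketch dependency-free; the paper operates on a mesh Laplacian.)
    i = faces[:, [0, 1, 2]].ravel()
    j = faces[:, [1, 2, 0]].ravel()
    A = sp.coo_matrix((np.ones(2 * len(i)), (np.r_[i, j], np.r_[j, i])),
                      shape=(n_vertices, n_vertices)).tocsr()
    A.data[:] = 1.0                      # collapse duplicate edges to weight 1
    D = sp.diags(np.asarray(A.sum(axis=1)).ravel())
    return (D - A).tocsc()

def short_spectrum(n_vertices, faces, k=20):
    # First k nonzero eigenvalues: the "short eigenvalue sequence"
    # that the attack perturbs.
    L = graph_laplacian(n_vertices, faces)
    vals = eigsh(L, k=k + 1, sigma=-0.01, which="LM",
                 return_eigenvectors=False)
    return np.sort(vals)[1:]             # drop the trivial zero eigenvalue

def attacked_spectrum(n_vertices, faces, r, k=20):
    # The universal attack is one small vector r added to every shape's
    # eigenvalue sequence; in the paper, r is optimized so that shapes
    # synthesized from the perturbed spectra (via shape-from-spectrum
    # recovery, not shown here) are misclassified.
    return short_spectrum(n_vertices, faces, k) + r

Because r lives in the spectral domain rather than on any particular vertex set, the same perturbation can be applied to shapes with different connectivity or representation, which is what makes the attack universal.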

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Rampini_2021_CVPR,
    author    = {Rampini, Arianna and Pestarini, Franco and Cosmo, Luca and Melzi, Simone and Rodola, Emanuele},
    title     = {Universal Spectral Adversarial Attacks for Deformable Shapes},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {3216-3226}
}