SAGA: Spectral Adversarial Geometric Attack on 3D Meshes

Tomer Stolik, Itai Lang, Shai Avidan; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 4284-4294

Abstract

A triangular mesh is one of the most popular 3D data representations. As such, the deployment of deep neural networks for mesh processing is widespread and attracting increasing attention. However, neural networks are prone to adversarial attacks, where carefully crafted inputs impair the model's functionality. Exploring these vulnerabilities is a fundamental factor in the future development of 3D-based applications. Recently, mesh attacks have been studied at the semantic level, where classifiers are misled into producing wrong predictions. Nevertheless, mesh surfaces possess complex geometric attributes beyond their semantic meaning, and their analysis often requires encoding and reconstructing the geometry of the shape. We propose a novel framework for a geometric adversarial attack on a 3D mesh autoencoder. In this setting, an adversarial input mesh deceives the autoencoder by forcing it to reconstruct a different geometric shape at its output. The malicious input is produced by perturbing a clean shape in the spectral domain. Our method leverages the spectral decomposition of the mesh along with additional mesh-related properties to obtain visually credible results that consider the delicacy of surface distortions.
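The core idea of perturbing a mesh in the spectral domain can be sketched as follows. This is a minimal illustration of the general technique, not the authors' attack: it builds the combinatorial graph Laplacian of a toy mesh (a tetrahedron), projects the vertex coordinates onto the Laplacian eigenbasis, and perturbs a low-frequency coefficient. SAGA itself optimizes such spectral perturbations adversarially against a mesh autoencoder, which is not reproduced here.

```python
import numpy as np

# Toy mesh: a tetrahedron with 4 vertices and 4 triangular faces.
V = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

# Build the adjacency matrix from face edges, then the combinatorial
# Laplacian L = D - A (a common choice; other Laplacians also work).
n = len(V)
A = np.zeros((n, n))
for f in faces:
    for i in range(3):
        a, b = f[i], f[(i + 1) % 3]
        A[a, b] = A[b, a] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Spectral decomposition: the eigenvectors of L form an orthonormal
# basis, the mesh analogue of Fourier modes.
eigvals, U = np.linalg.eigh(L)

# Project the vertex coordinates into the spectral domain.
coeffs = U.T @ V  # shape (n, 3): one coefficient vector per mode

# Perturb a low-frequency coefficient (index 0 is the constant mode,
# which only translates the shape, so we start at index 1). In SAGA
# this perturbation would be optimized against the autoencoder.
delta = np.zeros_like(coeffs)
delta[1] += 0.05
V_adv = U @ (coeffs + delta)
```

Because the perturbation lives in the low-frequency part of the spectrum, it deforms the surface smoothly rather than introducing high-frequency noise, which is what keeps the adversarial mesh visually credible.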

Related Material

@InProceedings{Stolik_2023_ICCV,
  author    = {Stolik, Tomer and Lang, Itai and Avidan, Shai},
  title     = {SAGA: Spectral Adversarial Geometric Attack on 3D Meshes},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2023},
  pages     = {4284-4294}
}