Explainability-Aware One Point Attack for Point Cloud Neural Networks

Hanxiao Tan, Helena Kotthaus; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023, pp. 4581-4590

Abstract


Recent studies have shown increased interest in investigating the reliability of point cloud networks through adversarial attacks. However, most existing studies aim to deceive humans, while few address the operating principles of the models themselves. In this work, we propose two adversarial methods, One Point Attack (OPA) and Critical Traversal Attack (CTA), which target the points crucial to predictions more precisely by incorporating explainability methods. Our results show that popular point cloud networks can be deceived with an almost 100% success rate by shifting only one point of the input instance. We also show how different point attribution distributions affect the adversarial robustness of point cloud networks, and we discuss how our approaches facilitate explainability studies for point cloud networks. To the best of our knowledge, this is the first point-cloud-based adversarial approach concerning explainability. Our code is available at https://github.com/Explain3D/Exp-One-Point-Atk-PC.
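The core idea sketched in the abstract can be illustrated with a toy example: rank points by an attribution score, then perturb only the single most critical point until the prediction flips. The max-pool "network" and the names below (`logits`, `point_attribution`, `one_point_attack`) are our own hypothetical stand-ins for illustration, not the paper's actual OPA implementation.

```python
import numpy as np

# Toy PointNet-style classifier: per-point features, global max pool,
# then a linear head. All weights are random; this is a sketch, not
# the architecture attacked in the paper.
rng = np.random.default_rng(0)
N, D, C = 64, 8, 2
W = rng.normal(size=(3, D))   # per-point feature map
V = rng.normal(size=(D, C))   # pooled feature -> class logits

def logits(points):
    feats = points @ W              # (N, D) per-point features
    return feats.max(axis=0) @ V    # global max pool, then classify

def point_attribution(points):
    # A point influences the output only through features where it wins
    # the max pool (its "critical" features); score each point by the
    # weight those features carry for the predicted class.
    feats = points @ W
    winners = feats.argmax(axis=0)
    pred = int(logits(points).argmax())
    scores = np.zeros(len(points))
    for d, p in enumerate(winners):
        scores[p] += abs(V[d, pred])
    return scores

def one_point_attack(points, step=0.1, iters=300, eps=1e-4):
    # Shift only the most critical point, descending the classification
    # margin via a numerical gradient, until the prediction flips.
    pts = points.copy()
    orig = int(logits(pts).argmax())
    idx = int(point_attribution(pts).argmax())   # most critical point

    def margin(p):
        l = logits(p)
        return l[orig] - np.max(np.delete(l, orig))

    for _ in range(iters):
        if int(logits(pts).argmax()) != orig:
            break                                # prediction flipped
        g = np.zeros(3)                          # central differences
        for k in range(3):
            up, dn = pts.copy(), pts.copy()
            up[idx, k] += eps
            dn[idx, k] -= eps
            g[k] = (margin(up) - margin(dn)) / (2 * eps)
        if np.linalg.norm(g) < 1e-12:
            break                                # point lost all influence
        pts[idx] -= step * g / np.linalg.norm(g) # move one point only
    return pts, idx
```

In this sketch the attribution is read directly off the max-pool winners; the paper instead plugs in dedicated explainability methods to locate critical points in real networks such as PointNet.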

Related Material


@InProceedings{Tan_2023_WACV,
    author    = {Tan, Hanxiao and Kotthaus, Helena},
    title     = {Explainability-Aware One Point Attack for Point Cloud Neural Networks},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2023},
    pages     = {4581-4590}
}