[pdf]
[supp]
[arXiv]
[bibtex]
@InProceedings{Chen_2022_WACV,
    author    = {Chen, Zhenhua and Wang, Chuhua and Crandall, David},
    title     = {Semantically Stealthy Adversarial Attacks Against Segmentation Models},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2022},
    pages     = {4080-4089}
}
Semantically Stealthy Adversarial Attacks Against Segmentation Models
Abstract
Segmentation models have been found to be vulnerable to both targeted and non-targeted adversarial attacks. However, the visibly damaged predictions they produce make such attacks easy to detect. In this paper, we propose semantically stealthy adversarial attacks that manipulate targeted labels as designed while preserving non-targeted labels, thereby hiding the attack behavior. One challenge is producing semantically meaningful manipulations across datasets/models; another is avoiding damage to non-targeted labels. To address these challenges, we use each input image as prior knowledge when generating perturbations, and we design a special regularizer to help extract features. To evaluate our model's performance, we define three basic attack types: 'vanishing into the context', 'embedding fake labels', and 'displacing target objects'. The experiments show that our stealthy adversarial model attacks segmentation models with a relatively high success rate on Cityscapes, Mapillary, and BDD100K. Finally, our framework also generalizes well across datasets and models empirically.
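As a rough illustration of the dual objective described in the abstract (manipulate labels in a targeted region while preserving predictions elsewhere), the following PyTorch-style sketch combines an attack loss on a chosen pixel mask with a preservation loss on the remaining pixels. The function name stealthy_attack_loss, its arguments, and the simple sum of the two terms are assumptions for illustration only, not the paper's actual formulation.

import torch
import torch.nn.functional as F

def stealthy_attack_loss(logits, orig_labels, target_mask, target_class):
    """Sketch of a dual-term objective for a semantically stealthy attack.

    logits:       (N, C, H, W) segmentation logits for the perturbed image
    orig_labels:  (N, H, W) labels predicted on the clean image
    target_mask:  (N, H, W) boolean mask of pixels to manipulate
    target_class: int, class the targeted pixels should be pushed toward
    """
    n, c, h, w = logits.shape
    flat_logits = logits.permute(0, 2, 3, 1).reshape(-1, c)
    flat_orig = orig_labels.reshape(-1)
    flat_mask = target_mask.reshape(-1)

    # Attack term: targeted pixels should be classified as target_class.
    attack = F.cross_entropy(
        flat_logits[flat_mask],
        torch.full_like(flat_orig[flat_mask], target_class))

    # Stealth term: all other pixels should keep their original labels,
    # so the attack does not visibly damage the rest of the prediction.
    preserve = F.cross_entropy(flat_logits[~flat_mask], flat_orig[~flat_mask])

    return attack + preserve

In practice such a loss would be minimized with respect to an additive perturbation on the input image (e.g. with projected gradient steps under an L-infinity bound); that optimization loop is omitted here.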
Related Material