Semantic Adversarial Attacks: Parametric Transformations That Fool Deep Classifiers

Ameya Joshi, Amitangshu Mukherjee, Soumik Sarkar, Chinmay Hegde; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 4773-4783

Abstract

Deep neural networks have been shown to exhibit an intriguing vulnerability to adversarial input images corrupted with imperceptible perturbations. However, the majority of adversarial attacks assume global, fine-grained control over the image pixel space. In this paper, we consider a different setting: what happens if the adversary could only alter specific attributes of the input image? These would generate inputs that might be perceptibly different, but still natural-looking and sufficient to fool a classifier. We propose a novel approach to generate such "semantic" adversarial examples by optimizing a particular adversarial loss over the range-space of a parametric conditional generative model. We demonstrate implementations of our attacks on binary classifiers trained on face images, and show that such natural-looking semantic adversarial examples exist. We evaluate the effectiveness of our attack on synthetic and real data, and present detailed comparisons with existing attack methods. We supplement our empirical results with theoretical bounds that demonstrate the existence of such parametric adversarial examples.
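The core idea of optimizing an adversarial loss over the range-space of a conditional generator can be illustrated with a minimal toy sketch. Everything below is a hypothetical stand-in, not the paper's implementation: the "generator" is linear in its attribute vector, the classifier is a linear binary model, and gradient ascent on the attributes (never on pixels directly) pushes the generated image across the decision boundary.

```python
import numpy as np

# Toy sketch of a "semantic" attack: instead of perturbing pixels
# directly, we optimize the low-dimensional attribute vector theta
# fed to a (here: linear, hypothetical) conditional generator G.
rng = np.random.default_rng(0)

d, k = 32, 3                      # image dimension, number of semantic attributes
A = rng.normal(size=(d, k))       # generator's attribute directions (stand-in)
x0 = rng.normal(size=d)           # base image produced at theta = 0

def G(theta):
    """Conditional generator: maps semantic attributes to an image."""
    return x0 + A @ theta

w = rng.normal(size=d)            # linear binary classifier f(x) = sign(w . x)

def logit(theta):
    return w @ G(theta)

theta = np.zeros(k)
orig_label = np.sign(logit(theta))

# Gradient ascent on the adversarial loss  -orig_label * logit(theta):
# move the logit across the decision boundary using attributes only.
grad = -orig_label * (A.T @ w)    # gradient of -y * logit w.r.t. theta (constant here)
for _ in range(100):
    theta += 0.05 * grad
    if np.sign(logit(theta)) != orig_label:
        break

adv_label = np.sign(logit(theta))
print(orig_label, adv_label)      # labels before / after the attribute attack
```

Because the search is restricted to the generator's k-dimensional attribute space rather than the d-dimensional pixel space, the resulting example stays on the generator's manifold; with a real conditional GAN in place of the linear `G`, the same loop would instead backpropagate the classifier's gradient through the generator to the attribute code.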

Related Material

[pdf] [supp]
[bibtex]
@InProceedings{Joshi_2019_ICCV,
author = {Joshi, Ameya and Mukherjee, Amitangshu and Sarkar, Soumik and Hegde, Chinmay},
title = {Semantic Adversarial Attacks: Parametric Transformations That Fool Deep Classifiers},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}