Ask, Acquire, and Attack: Data-free UAP Generation using Class Impressions

Konda Reddy Mopuri, Phani Krishna Uppala, R. Venkatesh Babu; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 19-34

Abstract


Deep learning models are susceptible to input-specific noise, called adversarial perturbations. Moreover, there exists input-agnostic noise, called Universal Adversarial Perturbations (UAPs), that can affect model inference on most input samples. Given a model, there are broadly two approaches to craft UAPs: (i) data-driven, which require data samples, and (ii) data-free, which do not. Data-driven approaches require actual samples from the underlying data distribution and craft UAPs with high success (fooling) rates. Data-free approaches, however, craft UAPs without utilizing any data samples and therefore achieve lower success rates. In this paper, for the data-free scenario, we propose a novel approach that emulates the effect of data samples with class impressions in order to craft UAPs using data-driven objectives. A class impression, for a given pair of category and model, is a generic representation (in the input space) of the samples belonging to that category. Further, we present a neural-network-based generative model that utilizes the acquired class impressions to learn to craft UAPs. Experimental evaluation demonstrates that the learned generative model (i) readily crafts UAPs via simple feed-forwarding through its layers, and (ii) achieves state-of-the-art success rates for the data-free scenario, close to those of the data-driven setting, without actually utilizing any data samples.
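To make the "acquire" step concrete, below is a minimal, illustrative sketch of obtaining a class impression by activation maximization: a random input is optimized so that a pre-trained classifier assigns a high pre-softmax score to a chosen category. This is not the authors' released code; the model choice (VGG-16), class index, learning rate, and iteration budget are assumptions for illustration only.

```python
# Sketch: acquiring a class impression via input-space optimization (assumed hyperparameters).
import torch
import torchvision.models as models

model = models.vgg16(pretrained=True).eval()     # any pre-trained classifier can stand in here
target_class = 130                               # hypothetical category index

# Start from random noise in the input space and maximize the target class activation.
impression = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([impression], lr=0.1)

for _ in range(500):                             # assumed iteration budget
    optimizer.zero_grad()
    logits = model(impression)
    loss = -logits[0, target_class]              # negate to maximize the pre-softmax activation
    loss.backward()
    optimizer.step()

# `impression` now acts as a proxy data sample for `target_class`; a set of such
# impressions can replace real data when training a generative network with a
# data-driven fooling objective to produce UAPs.
```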

Related Material


[pdf]
[bibtex]
@InProceedings{Mopuri_2018_ECCV,
author = {Mopuri, Konda Reddy and Uppala, Phani Krishna and Babu, R. Venkatesh},
title = {Ask, Acquire, and Attack: Data-free UAP Generation using Class Impressions},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}