Adversarial examples detection in features distance spaces

Fabio Carrara, Rudy Becarelli, Roberto Caldelli, Fabrizio Falchi, Giuseppe Amato; Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 2018


Maliciously manipulated inputs for attacking machine learning methods – in particular deep neural networks – are emerging as a relevant security issue for recent artificial intelligence technologies, especially in computer vision. In this paper, we focus on attacks targeting image classifiers implemented with deep neural networks, and we propose a method for detecting adversarial images that tracks the trajectory of internal representations (i.e., the activations of hidden-layer neurons, also known as deep features) from the very first layer up to the last. We argue that the representations of adversarial inputs evolve differently from those of genuine inputs, and we define a distance-based embedding of features to encode this information efficiently. We train an LSTM network that analyzes the sequence of deep features embedded in a distance space to detect adversarial examples. The results of our preliminary experiments are encouraging: our detection scheme is able to detect adversarial inputs targeted at a ResNet-50 classifier pretrained on the ILSVRC'12 dataset and generated by a variety of crafting algorithms.
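The distance-based embedding described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it assumes per-layer class centroids as the reference points in feature space (the exact reference set used by the paper is a design choice), and it produces, for each input, a sequence of per-layer distance vectors — the sequence an LSTM detector would then consume.

```python
import numpy as np

def distance_embedding(layer_feats, centroids_per_layer):
    """Embed an input's deep features into a distance space.

    layer_feats: list of 1-D feature vectors, one per hidden layer
                 (layers may have different dimensionalities).
    centroids_per_layer: list of (num_classes, dim) arrays of reference
                 points, one array per layer (hypothetical choice:
                 per-class centroids computed on genuine training data).

    Returns a (num_layers, num_classes) array: row t holds the Euclidean
    distances of the layer-t features to each class reference point, so
    the rows form the sequence fed to the LSTM detector.
    """
    seq = []
    for feats, centroids in zip(layer_feats, centroids_per_layer):
        # Distance of this layer's feature vector to every class centroid.
        d = np.linalg.norm(centroids - feats[None, :], axis=1)
        seq.append(d)
    return np.stack(seq)

# Toy usage: 3 layers of different widths, 5 classes.
rng = np.random.default_rng(0)
dims = [8, 16, 4]
num_classes = 5
layer_feats = [rng.normal(size=d) for d in dims]
centroids = [rng.normal(size=(num_classes, d)) for d in dims]
emb = distance_embedding(layer_feats, centroids)
print(emb.shape)  # (3, 5): one fixed-size distance vector per layer
```

Because every layer is mapped to a vector of the same length (one distance per class), layers of different widths become a uniform sequence, which is what makes a recurrent model such as an LSTM applicable.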

Related Material

@InProceedings{Carrara_2018_ECCV_Workshops,
  author    = {Carrara, Fabio and Becarelli, Rudy and Caldelli, Roberto and Falchi, Fabrizio and Amato, Giuseppe},
  title     = {Adversarial examples detection in features distance spaces},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV) Workshops},
  month     = {September},
  year      = {2018}
}