SafetyNet: Detecting and Rejecting Adversarial Examples Robustly
Jiajun Lu, Theerasit Issaranon, David Forsyth; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 446-454
Abstract
We describe a method to produce a network where current methods, such as DeepFool, have great difficulty producing adversarial samples. Our construction suggests some insights into how deep networks work. We provide a reasonable analysis that our construction is difficult to defeat, and show experimentally that our method is hard to defeat with both Type I and Type II attacks using several standard networks and datasets. This SafetyNet architecture is used in an important and novel application, SceneProof, which can reliably detect whether an image is a picture of a real scene or not. SceneProof applies to images captured with depth maps (RGBD images) and checks whether an image and its depth map are consistent. It relies on the relative difficulty of producing naturalistic depth maps for images in post-processing. We demonstrate that our SafetyNet is robust to adversarial examples built from currently known attacking approaches.
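The paper's SafetyNet appends a detector to a standard classifier: the detector examines discrete (binarized) codes of late-stage ReLU activations and flags adversarial inputs with an RBF-SVM, so flagged inputs can be rejected. The following is a minimal sketch of that idea only; the small network, the layer chosen for the code, and the random-noise stand-in for a real attack are illustrative assumptions, not the paper's setup.

import torch
import torch.nn as nn
from sklearn.svm import SVC

# Hypothetical small classifier standing in for the standard networks
# (e.g. ResNet variants) used in the paper.
class SmallNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.penultimate = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 16, 64), nn.ReLU(),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        # Return both the logits and the late-layer activations
        # from which the detector's code is built.
        code_input = self.penultimate(self.features(x))
        return self.head(code_input), code_input

def binary_code(activations, threshold=0.0):
    # Quantize late-layer ReLU activations to a binary code; the
    # detector operates on such discrete activation patterns.
    return (activations > threshold).float()

def fit_detector(codes, labels):
    # RBF-SVM separating codes of natural examples (label 0)
    # from codes of adversarial ones (label 1).
    det = SVC(kernel="rbf", gamma="scale")
    det.fit(codes, labels)
    return det

net = SmallNet()
x_nat = torch.randn(8, 3, 32, 32)
x_adv = x_nat + 0.03 * torch.randn_like(x_nat)  # stand-in for a real attack
with torch.no_grad():
    _, c_nat = net(x_nat)
    _, c_adv = net(x_adv)
codes = torch.cat([binary_code(c_nat), binary_code(c_adv)]).numpy()
labels = [0] * len(x_nat) + [1] * len(x_adv)
detector = fit_detector(codes, labels)
# At test time, inputs whose activation code the detector flags
# as adversarial are rejected rather than classified.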
Related Material
[pdf]
[supp]
[arXiv]
[bibtex]
@InProceedings{Lu_2017_ICCV,
author = {Lu, Jiajun and Issaranon, Theerasit and Forsyth, David},
title = {SafetyNet: Detecting and Rejecting Adversarial Examples Robustly},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}