Barrage of Random Transforms for Adversarially Robust Defense

Edward Raff, Jared Sylvester, Steven Forsyth, Mark McLean; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 6528-6537

Abstract


Defenses against adversarial examples, when using the ImageNet dataset, are historically easy to defeat. The common understanding is that a combination of simple image transformations and various other defenses is insufficient to provide the necessary protection once the obfuscated gradient is taken into account. In this paper, we explore the idea of stochastically combining a large number of individually weak defenses into a single barrage of randomized transformations to build a strong defense against adversarial attacks. We show that, even after accounting for obfuscated gradients, the Barrage of Random Transforms (BaRT) is a resilient defense against even the most difficult attacks, such as PGD. BaRT achieves up to a 24x improvement in accuracy compared to previous work, and remains effective out to a previously untested maximum adversarial perturbation of ε=32.
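The core idea of the abstract — stochastically combining many individually weak transforms into one randomized pipeline — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the transform functions below are hypothetical stand-ins operating on a flat list of pixel values, whereas the paper draws from a much larger set of real image transformations.

```python
import random

# Hypothetical stand-ins for individually weak image transforms;
# BaRT's actual transform set is far larger and operates on real images.
def quantize(img):
    # Coarsen pixel values, similar in spirit to JPEG-style degradation.
    return [round(p / 8) * 8 for p in img]

def cyclic_shift(img):
    # Rotate pixel order, a crude stand-in for a spatial transform.
    return img[1:] + img[:1]

def contrast_jitter(img):
    # Slightly boost contrast, clipping to the valid 0-255 range.
    return [min(255, int(p * 1.1)) for p in img]

TRANSFORMS = [quantize, cyclic_shift, contrast_jitter]

def barrage(img, k=2, rng=random):
    """Apply a random subset of k transforms in random order (BaRT-style)."""
    chosen = rng.sample(TRANSFORMS, k)  # random subset each call
    rng.shuffle(chosen)                 # random application order
    for transform in chosen:
        img = transform(img)
    return img
```

Because both the subset of transforms and their order are resampled on every call, an attacker cannot rely on a fixed, differentiable preprocessing pipeline, which is what makes the combined defense stronger than its individual parts.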

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Raff_2019_CVPR,
author = {Raff, Edward and Sylvester, Jared and Forsyth, Steven and McLean, Mark},
title = {Barrage of Random Transforms for Adversarially Robust Defense},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}