On the Sensitivity of Adversarial Robustness to Input Data Distributions

Gavin Weiguang Ding, Kry Yik Chau Lui, Xiaomeng Jin, Luyu Wang, Ruitong Huang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019, pp. 13-16

Abstract


Neural networks are vulnerable to small adversarial perturbations. While the existing literature has largely focused on the vulnerability of learned models, we demonstrate an intriguing phenomenon: adversarial robustness, unlike clean accuracy, is sensitive to the input data distribution. Even a semantics-preserving transformation of the input data distribution can lead to significantly different robustness for an adversarially trained model that is both trained and evaluated on the new distribution. We show this by constructing semantically identical variants of MNIST and CIFAR10: standardly trained models achieve similar clean accuracies on these variants, whereas adversarially trained models achieve significantly different robustness accuracies. This counter-intuitive phenomenon indicates that the input data distribution alone, not necessarily the task itself, can affect the adversarial robustness of trained neural networks. The full paper (ICLR 2019) can be found at https://openreview.net/forum?id=S1xNEhR9KX.
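
The abstract refers to constructing semantically identical dataset variants. The sketch below illustrates one hypothetical semantics-preserving pixel transformation of that kind: a "saturation" map with an illustrative parameter p that shifts the pixel-value distribution toward the extremes while leaving image content recognizable. The function name and parameterization are assumptions for illustration, not necessarily the construction used in the paper.

```python
# Minimal sketch (not the authors' exact construction): a semantics-preserving
# pixel transformation that changes the input distribution while keeping the
# image content, and hence clean classification, essentially intact.
import numpy as np


def saturate(images: np.ndarray, p: float = 4.0) -> np.ndarray:
    """Push pixels in [0, 1] toward 0/1; p = 2 recovers the identity map.

    The parameter p is an illustrative knob controlling how far the
    distribution is shifted, not a value taken from the paper.
    """
    centered = 2.0 * images - 1.0                       # rescale to [-1, 1]
    warped = np.sign(centered) * np.abs(centered) ** (2.0 / p)
    return np.clip((warped + 1.0) / 2.0, 0.0, 1.0)      # back to [0, 1]


if __name__ == "__main__":
    # Stand-in batch of grayscale images in [0, 1] (MNIST-shaped), used here
    # only to show that the transform changes the pixel statistics.
    rng = np.random.default_rng(0)
    x = rng.random((8, 28, 28)).astype(np.float32)
    x_variant = saturate(x, p=8.0)
    print("original mean:", float(x.mean()))
    print("variant mean :", float(x_variant.mean()))    # distribution shifts
```

Under the setup described in the abstract, one would then train and evaluate both a standard and an adversarially trained model on such a variant and compare clean accuracy and robust accuracy against the originals.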

Related Material


[pdf]
[bibtex]
@InProceedings{Ding_2019_CVPR_Workshops,
author = {Weiguang Ding, Gavin and Yik Chau Lui, Kry and Jin, Xiaomeng and Wang, Luyu and Huang, Ruitong},
title = {On the Sensitivity of Adversarial Robustness to Input Data Distributions},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019}
}