Enhancing Image Classification Robustness through Adversarial Sampling with Delta Data Augmentation (DDA)

Ivan Reyes-Amezcua, Gilberto Ochoa-Ruiz, Andres Mendez-Vazquez; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 274-283

Abstract


Deep learning models are susceptible to adversarial attacks, highlighting the critical need for enhanced adversarial robustness. Recent studies have shown that minor alterations to the input can significantly degrade a model's prediction accuracy, making it prone to such attacks. In our study, we present the Delta Data Augmentation (DDA) technique, a novel approach to improving transfer adversarial robustness by using perturbations derived from models trained to resist adversarial threats. Unlike conventional methods that attack the model directly, our approach sources adversarial perturbations from higher-level tasks and integrates them into the training of new tasks. This strategy aims to increase both the robustness and the adversarial diversity of the datasets. Through extensive empirical testing, we demonstrate the superiority of our data augmentation strategy over existing leading methods in enhancing adversarial robustness. This is particularly evident in our evaluations using Projected Gradient Descent (PGD) attacks with L2 and L-infinity norms on datasets such as CIFAR10, CIFAR100, SVHN, MNIST, and FashionMNIST.
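The core idea, as the abstract describes it, is to harvest adversarial perturbations (deltas) from a source model and reuse them as data augmentation when training on a new task. The sketch below illustrates this with an L-infinity PGD loop; the logistic-regression "source model", the helper names `pgd_linf_deltas` and `dda_augment`, and all hyperparameter values are illustrative assumptions, not the paper's implementation (which uses adversarially trained deep networks).

```python
import numpy as np

def pgd_linf_deltas(X, y, w, b, eps=0.3, alpha=0.05, steps=10):
    """L-infinity PGD against a toy logistic-regression source model
    (w, b). Stand-in for the paper's robust source network; returns
    the perturbations (deltas) rather than the adversarial inputs."""
    delta = np.zeros_like(X)
    for _ in range(steps):
        z = (X + delta) @ w + b            # logits of perturbed input
        p = 1.0 / (1.0 + np.exp(-z))       # sigmoid probabilities
        grad = np.outer(p - y, w)          # d(BCE loss)/d(input)
        # ascend the loss, then project back into the eps-ball
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    return delta

def dda_augment(X_new, deltas, rng):
    """Hypothetical augmentation step: add deltas sampled from the
    source model's perturbation pool to a new task's inputs."""
    idx = rng.integers(0, len(deltas), size=len(X_new))
    return np.clip(X_new + deltas[idx], 0.0, 1.0)

rng = np.random.default_rng(0)
X_src = rng.random((8, 4))
y_src = rng.integers(0, 2, size=8).astype(float)
w, b = rng.standard_normal(4), 0.0
deltas = pgd_linf_deltas(X_src, y_src, w, b)   # perturbation pool
X_aug = dda_augment(rng.random((5, 4)), deltas, rng)
```

The new-task model would then be trained on a mix of clean and `X_aug` samples; because each delta stays inside the eps-ball, augmentation strength is bounded the same way a direct PGD attack would be.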

Related Material


[bibtex]
@InProceedings{Reyes-Amezcua_2024_CVPR,
    author    = {Reyes-Amezcua, Ivan and Ochoa-Ruiz, Gilberto and Mendez-Vazquez, Andres},
    title     = {Enhancing Image Classification Robustness through Adversarial Sampling with Delta Data Augmentation (DDA)},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {274-283}
}