Data-free Defense of Black Box Models Against Adversarial Attacks

Gaurav Kumar Nayak, Inder Khatri, Ruchit Rawal, Anirban Chakraborty; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 254-263

Abstract


Companies often safeguard their trained deep models (i.e., architecture details, learnt weights, training details, etc.) from third-party users by exposing them only as 'black boxes' through APIs. Moreover, they may not even provide access to the training data due to proprietary reasons or sensitivity concerns. In this work, we propose a novel defense mechanism for black box models against adversarial attacks in a data-free setup. We construct synthetic data via a generative model and train a surrogate network using model stealing techniques. To minimize adversarial contamination on perturbed samples, we propose a 'wavelet noise remover' (WNR) that performs discrete wavelet decomposition on input images and carefully selects only a few important coefficients, determined by our 'wavelet coefficient selection module' (WCSM). To recover the high-frequency content of the image lost after noise removal via WNR, we further train a 'regenerator' network with the objective of retrieving the coefficients such that the reconstructed image yields predictions on the surrogate model similar to those of the original image. At test time, WNR combined with the trained regenerator network is prepended to the black box network, resulting in a high boost in adversarial accuracy. Our method improves the adversarial accuracy on CIFAR-10 by 38.98% and 32.01% against the state-of-the-art Auto Attack compared to the baseline, even when the attacker uses a surrogate architecture (Alexnet-half or Alexnet) similar to the black box architecture (Alexnet) and the same model stealing strategy as the defender.
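To make the wavelet noise removal step concrete, the sketch below uses PyWavelets to decompose an image, retain only a small fraction of detail coefficients, and reconstruct. The top-magnitude selection rule and the `keep_ratio` parameter here are illustrative assumptions; the paper's WCSM determines which coefficients to keep by its own criterion, which is not reproduced in this sketch.

```python
# Minimal sketch of the wavelet noise removal (WNR) idea: decompose an image,
# keep only a small fraction of the detail coefficients, and reconstruct.
# The magnitude-threshold rule below is an illustrative stand-in for the
# paper's WCSM, not the authors' actual selection criterion.
import numpy as np
import pywt


def wavelet_noise_removal(image: np.ndarray, wavelet: str = "haar",
                          level: int = 2, keep_ratio: float = 0.1) -> np.ndarray:
    """Zero out all but the largest `keep_ratio` fraction of detail coefficients."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Gather all detail coefficients to find a global magnitude threshold.
    detail_values = np.concatenate(
        [np.abs(band).ravel() for bands in coeffs[1:] for band in bands]
    )
    threshold = np.quantile(detail_values, 1.0 - keep_ratio)
    # Keep the approximation band intact; threshold the detail bands.
    filtered = [coeffs[0]]
    for bands in coeffs[1:]:
        filtered.append(tuple(np.where(np.abs(b) >= threshold, b, 0.0) for b in bands))
    return pywt.waverec2(filtered, wavelet)


# Example: filter a single grayscale image (H x W array in [0, 1]).
if __name__ == "__main__":
    img = np.random.rand(32, 32)          # stand-in for one CIFAR-10 channel
    cleaned = wavelet_noise_removal(img)  # same spatial size as the input
```

In the paper's pipeline, the output of this filtering stage is passed through the trained regenerator network before being fed to the black box model; the sketch above covers only the coefficient-removal stage.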

Related Material


[bibtex]
@InProceedings{Nayak_2024_CVPR,
  author    = {Nayak, Gaurav Kumar and Khatri, Inder and Rawal, Ruchit and Chakraborty, Anirban},
  title     = {Data-free Defense of Black Box Models Against Adversarial Attacks},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {254-263}
}