Beating Backdoor Attack at Its Own Game

Min Liu, Alberto Sangiovanni-Vincentelli, Xiangyu Yue; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 4620-4629

Abstract


Deep neural networks (DNNs) are vulnerable to backdoor attacks, which do not affect the network's performance on clean data but manipulate its behavior once a trigger pattern is added. Existing defense methods have greatly reduced the attack success rate, but their prediction accuracy on clean data still lags behind that of a clean model by a large margin. Inspired by the stealthiness and effectiveness of backdoor attacks, we propose a simple but highly effective defense framework that injects non-adversarial backdoors targeting poisoned samples. Following the general steps of a backdoor attack, we detect a small set of suspected samples and then apply a poisoning strategy to them. The non-adversarial backdoor, once triggered, suppresses the attacker's backdoor on poisoned data but has limited influence on clean data. The defense can be carried out during data preprocessing, without any modification to the standard end-to-end training pipeline. We conduct extensive experiments on multiple benchmarks with different architectures and representative attacks. Results demonstrate that our method achieves state-of-the-art defense effectiveness with by far the lowest performance drop on clean data. Considering the surprising defense ability displayed by our framework, we call for more attention to utilizing backdoors for backdoor defense. Code is available at https://github.com/damianliumin/non-adversarial_backdoor.
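The preprocessing pipeline described above (detect suspected samples, then "poison" them with a non-adversarial trigger, and stamp the same trigger at inference time) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the corner-patch trigger, the caller-supplied suspected indices, and the pseudo-label relabeling are assumptions made here for concreteness; the actual detection and poisoning strategy are in the linked repository.

import numpy as np

def stamp_trigger(image, value=1.0, size=3):
    # Stamp a small patch in one corner as the defensive (non-adversarial)
    # trigger. Patch shape, location, and value are illustrative choices.
    patched = image.copy()
    patched[..., -size:, -size:] = value
    return patched

def poison_suspected(images, labels, suspected_idx, pseudo_labels):
    # Apply the defensive poisoning only to suspected samples: stamp the
    # trigger and relabel them (pseudo-labels are an assumption here), so
    # the network learns a benign backdoor tied to the trigger pattern.
    images, labels = images.copy(), labels.copy()
    for i in suspected_idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = pseudo_labels[i]
    return images, labels

def defend_inference(predict_fn, image):
    # At test time, stamping the defensive trigger activates the
    # non-adversarial backdoor, which is meant to suppress the attacker's
    # backdoor on poisoned inputs while barely affecting clean inputs.
    return predict_fn(stamp_trigger(image))

Training itself stays unchanged: the modified dataset is fed to the standard end-to-end pipeline, which is what allows the defense to live entirely in data preprocessing.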

Related Material


@InProceedings{Liu_2023_ICCV,
    author    = {Liu, Min and Sangiovanni-Vincentelli, Alberto and Yue, Xiangyu},
    title     = {Beating Backdoor Attack at Its Own Game},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {4620-4629}
}