CLEAR: Clean-Up Sample-Targeted Backdoor in Neural Networks

Liuwan Zhu, Rui Ning, Chunsheng Xin, Chonggang Wang, Hongyi Wu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 16453-16462

Abstract


Data poisoning attacks have raised serious concerns about the security of deep neural networks, since they can implant a neural backdoor that misclassifies certain inputs crafted by an attacker. In particular, the sample-targeted backdoor attack poses a new challenge: it targets one or a few specific samples, called target samples, and misclassifies them into a target class. Because no trigger is planted in the backdoored model, existing backdoor detection schemes, which rely on reverse-engineering the trigger or its strong features, fail to detect the sample-targeted backdoor. In this paper, we propose a novel scheme to detect and mitigate sample-targeted backdoor attacks. We discover and demonstrate a unique property of the sample-targeted backdoor: it forces a decision-boundary change such that small "pockets" are formed around the target sample. Based on this observation, we propose a defense mechanism that pinpoints malicious pockets by "wrapping" them into a tight convex hull in the feature space. We design an effective algorithm to search for such a convex hull and remove the backdoor by fine-tuning the model on the identified malicious samples, relabeled according to the convex hull. Experiments show that the proposed approach is highly effective in detecting and mitigating a wide range of sample-targeted backdoor attacks.
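
As an illustration of the overall idea only (not the authors' released code), the following Python sketch shows how one might wrap suspicious samples into a convex hull in a reduced feature space, flag the samples that fall inside it, and fine-tune the model on those samples with a corrected label. The feature-extraction hook (model.features), the PCA dimensionality, and the optimizer settings are illustrative assumptions rather than details taken from the paper.

    # Hedged sketch of the convex-hull defense idea; all names below are assumptions.
    import numpy as np
    import torch
    import torch.nn as nn
    from scipy.spatial import Delaunay
    from sklearn.decomposition import PCA

    def extract_features(model, inputs, device="cpu"):
        """Penultimate-layer features; assumes a model.features() hook exists (assumption)."""
        model.eval()
        with torch.no_grad():
            return model.features(inputs.to(device)).cpu().numpy()

    def convex_hull_mask(all_feats, hull_feats, n_components=3):
        """Project features to a low-dimensional space and test which samples fall
        inside the convex hull spanned by the suspicious (hull) samples.
        Requires more hull samples than n_components for a non-degenerate hull."""
        pca = PCA(n_components=n_components).fit(all_feats)
        hull = Delaunay(pca.transform(hull_feats))          # hull of suspicious samples
        return hull.find_simplex(pca.transform(all_feats)) >= 0

    def fine_tune_with_corrected_labels(model, inputs, labels, inside_mask,
                                        corrected_label, epochs=5, lr=1e-4):
        """Relabel samples inside the hull and fine-tune briefly to erase the backdoor."""
        labels = labels.clone()
        labels[torch.from_numpy(inside_mask)] = corrected_label
        opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        loss_fn = nn.CrossEntropyLoss()
        model.train()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(inputs), labels)
            loss.backward()
            opt.step()
        return model

In practice one would extract features for the full (potentially poisoned) training set, build the hull around the samples suspected of forming a "pocket", and pass the resulting mask to the fine-tuning step; the convex-hull search itself is the paper's contribution and is only approximated here by a fixed set of suspicious samples.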

Related Material


[pdf]
[bibtex]
@InProceedings{Zhu_2021_ICCV,
    author    = {Zhu, Liuwan and Ning, Rui and Xin, Chunsheng and Wang, Chonggang and Wu, Hongyi},
    title     = {CLEAR: Clean-Up Sample-Targeted Backdoor in Neural Networks},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {16453-16462}
}