Data Poisoning based Backdoor Attacks to Contrastive Learning

Jinghuai Zhang, Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 24357-24366

Abstract


Contrastive learning (CL) pre-trains general-purpose encoders using an unlabeled pre-training dataset, which consists of images or image-text pairs. CL is vulnerable to data poisoning based backdoor attacks (DPBAs), in which an attacker injects poisoned inputs into the pre-training dataset so that the encoder is backdoored. However, existing DPBAs achieve limited effectiveness. In this work, we take the first step toward analyzing the limitations of existing backdoor attacks and propose new DPBAs, called CorruptEncoder, to CL. CorruptEncoder introduces a new attack strategy to create poisoned inputs and uses a theory-guided method to maximize attack effectiveness. Our experiments show that CorruptEncoder substantially outperforms existing DPBAs. In particular, CorruptEncoder is the first DPBA that achieves more than 90% attack success rates with only a few (3) reference images and a small poisoning ratio (0.5%). Moreover, we also propose a defense, called localized cropping, to defend against DPBAs. Our results show that our defense can reduce the effectiveness of DPBAs, but it sacrifices the utility of the encoder, highlighting the need for new defenses.
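To make the two ideas in the abstract concrete, below is a minimal, illustrative sketch (not the paper's actual implementation): a poisoned pre-training input is built by pasting a reference-object patch and a trigger patch side by side on a background image, so that random crops taken during CL augmentation can contain both; the `localized_crop` function sketches the localized-cropping defense, which draws the two augmented views from nearly the same location so a single view rarely spans both the trigger and the object. Images are represented as plain 2-D lists, and all function names and parameters are hypothetical.

```python
import random

def paste_patch(image, patch, top, left):
    """Copy `patch` (a 2-D list) into `image` in place at (top, left)."""
    for i, row in enumerate(patch):
        for j, value in enumerate(row):
            image[top + i][left + j] = value
    return image

def make_poisoned_input(background, reference, trigger, top, left, gap=1):
    """Illustrative poisoned input: a reference-object patch and a trigger
    patch placed side by side on a background, so that random crops of the
    result can pair the trigger with the object during pre-training."""
    poisoned = [row[:] for row in background]  # deep copy, keep background intact
    paste_patch(poisoned, reference, top, left)
    # place the trigger just to the right of the reference object
    trigger_left = left + len(reference[0]) + gap
    paste_patch(poisoned, trigger, top, trigger_left)
    return poisoned

def localized_crop(image, crop, max_offset=1):
    """Defense sketch: sample two views from nearly the same region, so the
    two augmented views rarely span both the trigger and the object."""
    h, w = len(image), len(image[0])
    top = random.randrange(h - crop + 1)
    left = random.randrange(w - crop + 1)

    def crop_at(t, l):
        return [row[l:l + crop] for row in image[t:t + crop]]

    # second view is jittered by at most `max_offset` pixels, clamped to bounds
    t2 = min(max(top + random.randint(-max_offset, max_offset), 0), h - crop)
    l2 = min(max(left + random.randint(-max_offset, max_offset), 0), w - crop)
    return crop_at(top, left), crop_at(t2, l2)
```

In this toy setting, an attacker would mix many such poisoned images into the unlabeled pre-training dataset at a small poisoning ratio; the defense replaces the standard global random-crop augmentation with the localized version.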

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Zhang_2024_CVPR,
    author    = {Zhang, Jinghuai and Liu, Hongbin and Jia, Jinyuan and Gong, Neil Zhenqiang},
    title     = {Data Poisoning based Backdoor Attacks to Contrastive Learning},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {24357-24366}
}