Semantic Shield: Defending Vision-Language Models Against Backdooring and Poisoning via Fine-grained Knowledge Alignment

Alvi Md Ishmam, Christopher Thomas; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 24820-24830

Abstract


In recent years, there has been enormous interest in vision-language models trained using self-supervised objectives. However, the use of large-scale datasets scraped from the web for training also makes these models vulnerable to potential security threats such as backdooring and poisoning attacks. In this paper, we propose a method for mitigating such attacks on contrastively trained vision-language models. Our approach leverages external knowledge extracted from a language model to prevent models from learning correlations with image regions that lack strong alignment with that knowledge. We do this by imposing constraints that enforce that the attention the model pays to visual regions is proportional to those regions' alignment with external knowledge. We conduct extensive experiments using a variety of recent backdooring and poisoning attacks on multiple datasets and architectures. Our results clearly demonstrate that our proposed approach is highly effective at defending against such attacks across multiple settings while maintaining model utility, and it requires no changes at inference time.
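To make the knowledge-alignment constraint described above concrete, the sketch below shows one plausible form such a loss could take: region-level attention is pushed toward a distribution derived from how well each region matches knowledge embeddings. This is a minimal illustration, not the paper's implementation; the tensor shapes, the temperature, and all function names are assumptions.

```python
import torch
import torch.nn.functional as F

def knowledge_alignment_loss(patch_feats, knowledge_feats, patch_attn):
    """Hypothetical sketch of a fine-grained knowledge-alignment constraint.

    patch_feats:     (B, P, D) image region/patch embeddings
    knowledge_feats: (B, K, D) embeddings of knowledge statements from a language model
    patch_attn:      (B, P)    attention the model pays to each region (sums to 1 over P)

    Idea (paraphrasing the abstract): attention over visual regions should be
    proportional to how well each region aligns with external knowledge, so regions
    with no knowledge support (e.g., injected triggers) receive little attention.
    """
    # Cosine similarity between every region and every knowledge statement.
    patch_feats = F.normalize(patch_feats, dim=-1)
    knowledge_feats = F.normalize(knowledge_feats, dim=-1)
    sim = torch.einsum("bpd,bkd->bpk", patch_feats, knowledge_feats)  # (B, P, K)

    # Per-region knowledge support: best-matching knowledge statement per region,
    # converted into a target distribution over regions (temperature is assumed).
    support = sim.max(dim=-1).values            # (B, P)
    target = F.softmax(support / 0.07, dim=-1)  # (B, P)

    # Penalize attention that deviates from the knowledge-derived target.
    return F.kl_div(torch.log(patch_attn + 1e-8), target, reduction="batchmean")

if __name__ == "__main__":
    # Toy usage: batch of 4 images, 49 regions, 8 knowledge statements, 512-dim features.
    B, P, K, D = 4, 49, 8, 512
    attn = F.softmax(torch.randn(B, P), dim=-1)
    loss = knowledge_alignment_loss(torch.randn(B, P, D), torch.randn(B, K, D), attn)
    print(loss.item())
```

In such a setup, this term would be added to the usual contrastive training objective so that region-level attention stays consistent with external knowledge during pretraining.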

Related Material


@InProceedings{Ishmam_2024_CVPR,
    author    = {Ishmam, Alvi Md and Thomas, Christopher},
    title     = {Semantic Shield: Defending Vision-Language Models Against Backdooring and Poisoning via Fine-grained Knowledge Alignment},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {24820-24830}
}