Zero-Shot Class Unlearning in CLIP with Synthetic Samples

Alexey Kravets, Vinay Namboodiri; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 6456-6464

Abstract

Machine unlearning is a crucial area of research. It is driven by the need to remove sensitive information from models to safeguard individuals' right to be forgotten under rigorous regulations such as the GDPR. In this work, we focus on unlearning within CLIP, a dual vision-language encoder model trained on a massive dataset of image-text pairs using a contrastive loss. To achieve forgetting, we expand the application of Lipschitz regularization to the multimodal context of CLIP. Specifically, we smooth both the visual and textual embeddings associated with the class intended to be forgotten relative to the perturbation introduced to samples from that class. Importantly, we also remove the need for real forgetting data by generating synthetic samples through gradient ascent that maximizes the target class score. Our forgetting procedure is iterative: we track accuracy on a synthetic forget set and stop when accuracy falls below a chosen threshold. Finally, we employ a selective layer-update strategy based on the layers' average absolute gradient values to mitigate over-forgetting. We validate our approach on several standard datasets and provide a thorough ablation analysis and comparisons with previous work.
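
The abstract describes the recipe only at a high level. Below is a minimal PyTorch sketch of the main ingredients, not the authors' implementation: it assumes a CLIP-style model exposing encode_image/encode_text (as in the OpenAI CLIP API) and a tokenize callable; the function names synthesize_forget_samples, lipschitz_smoothing_loss, and select_layers_by_grad, as well as all hyperparameters, are hypothetical, and the ranking direction in the layer-selection step is an assumption.

    import torch
    import torch.nn.functional as F

    def synthesize_forget_samples(model, tokenize, class_name, n=16,
                                  image_size=224, steps=200, lr=0.05,
                                  device="cuda"):
        """Generate synthetic images for the forget class by gradient
        ascent on CLIP image-text similarity (a rough reading of the paper)."""
        model.eval()
        for p in model.parameters():
            p.requires_grad_(False)  # only the pixels are optimized here

        tokens = tokenize([f"a photo of a {class_name}"]).to(device)
        t = F.normalize(model.encode_text(tokens), dim=-1)      # (1, d)

        # Start from noise and optimize the pixels to maximize the
        # class score; a real pipeline would also normalize/clamp images.
        x = torch.randn(n, 3, image_size, image_size, device=device,
                        requires_grad=True)
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            v = F.normalize(model.encode_image(x), dim=-1)      # (n, d)
            loss = -(v @ t.T).mean()  # ascend on similarity to the class
            opt.zero_grad()
            loss.backward()
            opt.step()
        return x.detach()

    def lipschitz_smoothing_loss(model, x, sigma=0.05):
        """Empirical Lipschitz penalty ||f(x+d) - f(x)|| / ||d|| on the
        image encoder; the paper applies an analogous penalty on the
        text side."""
        delta = sigma * torch.randn_like(x)
        f_x = model.encode_image(x)
        f_xd = model.encode_image(x + delta)
        num = (f_xd - f_x).flatten(1).norm(dim=1)
        den = delta.flatten(1).norm(dim=1) + 1e-8
        return (num / den).mean()

    def select_layers_by_grad(model, k=5):
        """Keep only the k parameter tensors with the largest mean |grad|
        trainable (an assumed reading of the selective-update strategy)."""
        stats = [(name, p.grad.abs().mean().item())
                 for name, p in model.named_parameters() if p.grad is not None]
        keep = {name for name, _ in sorted(stats, key=lambda s: -s[1])[:k]}
        for name, p in model.named_parameters():
            p.requires_grad_(name in keep)

A forgetting loop in this spirit would first run one backward pass with all parameters trainable to populate gradients, call select_layers_by_grad, then repeatedly minimize lipschitz_smoothing_loss on the synthetic forget set, stopping once zero-shot accuracy on that set falls below the chosen threshold.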

Related Material

[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Kravets_2025_WACV,
    author    = {Kravets, Alexey and Namboodiri, Vinay},
    title     = {Zero-Shot Class Unlearning in CLIP with Synthetic Samples},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {6456-6464}
}