Continual Forgetting for Pre-trained Vision Models

Hongbo Zhao, Bolin Ni, Junsong Fan, Yuxi Wang, Yuntao Chen, Gaofeng Meng, Zhaoxiang Zhang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 28631-28642

Abstract


Due to privacy and security concerns, the need to erase unwanted information from pre-trained vision models is becoming evident. In real-world scenarios, erasure requests originate at any time from both users and model owners, and these requests usually form a sequence. Under such a setting, selective information is expected to be continuously removed from a pre-trained model while the rest is maintained. We define this problem as continual forgetting and identify two key challenges: (i) for unwanted knowledge, efficient and effective deletion is crucial; (ii) for remaining knowledge, the impact of the forgetting procedure should be minimal. To address them, we propose Group Sparse LoRA (GS-LoRA). Specifically, toward (i), we use LoRA modules to fine-tune the FFN layers in Transformer blocks for each forgetting task independently, and toward (ii), we adopt a simple group sparse regularization that enables automatic selection of specific LoRA groups and zeroes out the others. GS-LoRA is effective, parameter-efficient, data-efficient, and easy to implement. We conduct extensive experiments on face recognition, object detection, and image classification, and demonstrate that GS-LoRA manages to forget specific classes with minimal impact on other classes. Code will be released at https://github.com/bjzhb666/GS-LoRA.
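The group sparse regularization described above is, at its core, a group-lasso penalty: each LoRA module (e.g., the A and B matrices attached to one FFN layer) forms one group, and penalizing the per-group L2 norm drives entire groups to exactly zero. The sketch below illustrates this mechanism with NumPy; the function names and the proximal thresholding step are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

def group_sparse_penalty(lora_groups):
    """Group-lasso penalty: sum over groups of the L2 norm of each
    group's stacked LoRA parameters (A and B treated as one group)."""
    total = 0.0
    for A, B in lora_groups:
        group_vec = np.concatenate([A.ravel(), B.ravel()])
        total += np.linalg.norm(group_vec)  # one L2 norm per group
    return total

def prox_group_threshold(lora_groups, thresh):
    """Proximal (group soft-threshold) step: a group whose norm is at
    most `thresh` is zeroed out entirely (that LoRA module is pruned);
    otherwise the whole group is shrunk toward zero."""
    out = []
    for A, B in lora_groups:
        norm = np.linalg.norm(np.concatenate([A.ravel(), B.ravel()]))
        if norm <= thresh:
            out.append((np.zeros_like(A), np.zeros_like(B)))
        else:
            scale = 1.0 - thresh / norm
            out.append((A * scale, B * scale))
    return out

# Two hypothetical LoRA groups: one with large weights, one near zero.
groups = [
    (np.ones((2, 2)), np.zeros((2, 2))),        # norm 2.0, survives
    (np.full((2, 2), 0.01), np.zeros((2, 2))),  # norm 0.02, pruned
]
print(group_sparse_penalty(groups))             # 2.0 + 0.02 = 2.02
pruned = prox_group_threshold(groups, thresh=0.1)
print(np.all(pruned[1][0] == 0))                # small group zeroed: True
```

Because the penalty is non-smooth at zero, groups that contribute little to the forgetting objective collapse to exact zeros, which is what allows automatic selection of which LoRA groups to keep.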

Related Material


@InProceedings{Zhao_2024_CVPR,
    author    = {Zhao, Hongbo and Ni, Bolin and Fan, Junsong and Wang, Yuxi and Chen, Yuntao and Meng, Gaofeng and Zhang, Zhaoxiang},
    title     = {Continual Forgetting for Pre-trained Vision Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {28631-28642}
}