BrainWash: A Poisoning Attack to Forget in Continual Learning

Ali Abbasi, Parsa Nooralinejad, Hamed Pirsiavash, Soheil Kolouri; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 24057-24067

Abstract


Continual learning has gained substantial attention within the deep learning community, offering promising solutions to the challenging problem of sequential learning. Yet a largely unexplored facet of this paradigm is its susceptibility to adversarial attacks, especially those aimed at inducing forgetting. In this paper, we introduce "BrainWash," a novel data poisoning method tailored to impose forgetting on a continual learner. By adding the BrainWash noise to a variety of baselines, we demonstrate how a trained continual learner can be induced to forget its previously learned tasks catastrophically, even when using these continual learning baselines. An important feature of our approach is that the attacker requires no access to previous tasks' data and is armed merely with the model's current parameters and the data belonging to the most recent task. Our extensive experiments highlight the efficacy of BrainWash, showcasing performance degradation across various regularization- and memory replay-based continual learning methods. Our code is available here: https://github.com/mint-vu/Brainwash
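
The abstract states the threat model (attacker holds only the current weights and the most recent task's data) but not the optimization itself. Below is a minimal, hypothetical PyTorch sketch of one way a bounded poison could be crafted under those constraints. The parameter-drift proxy objective, the function name brainwash_style_noise, and all hyperparameters are illustrative assumptions for exposition, not the paper's algorithm; see the authors' repository for the actual method.

import torch
import torch.nn as nn
import torch.nn.functional as F

def brainwash_style_noise(model, x, y, eps=8/255, steps=50,
                          inner_lr=0.1, noise_lr=0.01):
    """Craft bounded additive noise for the current task's batch (x, y).

    Assumed proxy objective: maximize how far a single SGD step on the
    poisoned batch would displace the model's weights, as a crude
    stand-in for induced forgetting when no old-task data is available.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=noise_lr)
    params = [p for p in model.parameters() if p.requires_grad]
    for _ in range(steps):
        # Inner objective: the training loss the victim would minimize
        # on the poisoned current-task data.
        loss = F.cross_entropy(model(x + delta), y)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        # Outer (attacker) objective: one inner SGD step should move the
        # weights as far as possible from their current values.
        drift = sum((inner_lr * g).pow(2).sum() for g in grads)
        opt.zero_grad()
        (-drift).backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the poison imperceptible
    return delta.detach()

# Toy usage on random data with a small classifier (illustrative only).
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
x, y = torch.randn(16, 3, 32, 32), torch.randint(0, 10, (16,))
poisoned_x = x + brainwash_style_noise(model, x, y)

The sketch is bilevel in spirit: the outer loop updates the noise by differentiating through a simulated inner training step (create_graph=True enables the required second-order gradients), and the clamp keeps the perturbation within an L-infinity budget so the poisoned images remain visually unchanged.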

Related Material


[bibtex]
@InProceedings{Abbasi_2024_CVPR,
    author    = {Abbasi, Ali and Nooralinejad, Parsa and Pirsiavash, Hamed and Kolouri, Soheil},
    title     = {BrainWash: A Poisoning Attack to Forget in Continual Learning},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {24057-24067}
}