Learn To Unlearn for Deep Neural Networks: Minimizing Unlearning Interference With Gradient Projection

Tuan Hoang, Santu Rana, Sunil Gupta, Svetha Venkatesh; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024, pp. 4819-4828

Abstract


Recent data-privacy laws have sparked interest in machine unlearning, which involves removing the effect of specific training samples from a learnt model as if they were never present in the original training dataset. The challenge of machine unlearning is to discard information about the "forget" data from the learnt model without altering the knowledge about the remaining dataset, and to do so more efficiently than the naive retraining approach. To achieve this, we adopt a projected-gradient based learning method, named Projected-Gradient Unlearning (PGU), in which the model takes steps within the gradient subspace deemed unimportant for the retained dataset (i.e., orthogonal to the subspace important for it), so that its knowledge is preserved. By utilizing Stochastic Gradient Descent (SGD) to update the model weights, our method can efficiently scale to any model and dataset size. We provide empirical evidence to demonstrate that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible. Our code is available at https://github.com/hnanhtuan/projected_gradient_unlearning.
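To make the projected-gradient update concrete, below is a minimal PyTorch sketch. All names here (`project_out`, `unlearning_step`, `bases`) are illustrative assumptions, not the authors' released API; the sketch assumes a per-layer orthonormal basis of the gradient subspace important for the retained data has already been computed (e.g., via SVD of layer representations, as is common in gradient-projection methods), after which unlearning steps no longer need access to the training set.

```python
import torch

def project_out(grad: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
    """Remove the component of `grad` that lies in span(basis).

    grad:  flattened gradient, shape (d,)
    basis: orthonormal basis of the important subspace, shape (d, k), k << d
    """
    # Component of grad inside the important subspace: basis @ (basis^T @ grad).
    # Subtracting it leaves only directions unimportant for the retained data.
    return grad - basis @ (basis.T @ grad)

def unlearning_step(model, loss, bases, lr=1e-3):
    """One SGD step on an unlearning objective (hypothetical helper), with
    each parameter update projected orthogonal to its important subspace."""
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p, B in zip(model.parameters(), bases):
            g = project_out(p.grad.flatten(), B).view_as(p)
            p -= lr * g  # step only in directions unimportant for the retain set
```

Because the projection is a single matrix-vector product per layer, each unlearning step costs little more than a plain SGD step, which is what lets the method scale with model and dataset size.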

Related Material


[bibtex]
@InProceedings{Hoang_2024_WACV,
  author    = {Hoang, Tuan and Rana, Santu and Gupta, Sunil and Venkatesh, Svetha},
  title     = {Learn To Unlearn for Deep Neural Networks: Minimizing Unlearning Interference With Gradient Projection},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2024},
  pages     = {4819-4828}
}