Towards Efficient Machine Unlearning with Data Augmentation: Guided Loss-Increasing (GLI) to Prevent the Catastrophic Model Utility Drop

Dasol Choi, Soora Choi, Eunsun Lee, Jinwoo Seo, Dongbin Na; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 93-102

Abstract


Machine unlearning algorithms aim to make a model forget specific data that may have been used during the training phase. To solve this problem, various studies have adopted loss-increasing methods. For example, some unlearning methods present data augmentation techniques that generate synthesized images maximizing the loss on the images to be forgotten, while others directly update the model in the direction of increasing loss on those images. In this paper, we first revisit these loss-increasing methods and analyze their limitations. We find that these simple loss-increasing strategies can be effective in terms of the forgetting score but can unexpectedly hurt the original model utility; we call this phenomenon the catastrophic model utility drop. We propose a novel data augmentation method, Guided Loss-Increasing (GLI), that restricts the direction of the data update to resolve the utility-drop issue. This is achieved by aligning updates with the model's existing knowledge, thereby ensuring that the unlearning process does not adversely affect the model's original performance. Our extensive experiments demonstrate that our method achieves superior (1) model utility and (2) forgetting performance compared to previous state-of-the-art (SOTA) methods. Furthermore, we demonstrate that the Jensen-Shannon divergence can be utilized to robustly evaluate the forgetting score. All source code and trained model weights will be made publicly available upon paper acceptance.
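The two ingredients named in the abstract can be sketched in a few lines. The Jensen-Shannon divergence below is the standard definition (it could, for example, compare an unlearned model's output distribution on a forgotten sample against a reference such as a retrained-from-scratch model). The `guided_direction` helper is only an illustrative assumption of what "restricting the direction of the data update" might look like: it removes from the forget-loss ascent direction any component that would also increase a retain-set loss (a PCGrad-style projection). The paper's actual GLI guidance rule may differ.

```python
import math

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions (in nats).
    Symmetric, unlike plain KL divergence, and bounded in [0, ln 2]."""
    p = [pi + eps for pi in p]
    q = [qi + eps for qi in q]
    sp, sq = sum(p), sum(q)
    p = [pi / sp for pi in p]
    q = [qi / sq for qi in q]
    m = [0.5 * (pi + qi) for pi, qi in zip(p, q)]
    kl = lambda a, b: sum(ai * math.log(ai / bi) for ai, bi in zip(a, b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def guided_direction(g_forget, g_retain):
    """Hypothetical 'guided' update: start from the gradient that increases
    the forget-set loss, then project out any component that would also
    increase the retain-set loss. This is a PCGrad-style projection used
    for illustration, not the paper's exact GLI rule."""
    dot = sum(a * b for a, b in zip(g_forget, g_retain))
    if dot <= 0:  # no conflict: raising forget loss does not raise retain loss
        return list(g_forget)
    norm_sq = sum(b * b for b in g_retain)
    return [a - (dot / norm_sq) * b for a, b in zip(g_forget, g_retain)]
```

After the projection, the guided direction is orthogonal to the retain-loss gradient, so a small step along it increases the forget loss without (to first order) degrading retain-set performance.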

Related Material


[bibtex]
@InProceedings{Choi_2024_CVPR,
    author    = {Choi, Dasol and Choi, Soora and Lee, Eunsun and Seo, Jinwoo and Na, Dongbin},
    title     = {Towards Efficient Machine Unlearning with Data Augmentation: Guided Loss-Increasing (GLI) to Prevent the Catastrophic Model Utility Drop},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {93-102}
}