Forget-Me-Not: Learning to Forget in Text-to-Image Diffusion Models

Gong Zhang, Kai Wang, Xingqian Xu, Zhangyang Wang, Humphrey Shi; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 1755-1764

Abstract


The significant advances in applications of text-to-image generation models have prompted the demand for post-hoc adaptation algorithms that can efficiently remove unwanted concepts (e.g., privacy, copyright, and safety) from a pretrained model with minimal influence on the knowledge system learned during pretraining. Existing methods mainly resort to explicitly finetuning unwanted concepts to become alternatives such as their hypernyms or antonyms. Essentially, they modify the knowledge system of the pretrained model by replacing the unwanted concept with something arbitrarily defined by the user. Furthermore, these methods require hundreds of optimization steps, as they rely solely on the denoising loss used for pretraining. To address these challenges, we propose Forget-Me-Not, a model-centric and efficient solution designed to remove identities, objects, or styles from a well-configured text-to-image model in as little as 30 seconds, without significantly impairing its ability to generate other content. In contrast to existing methods, we introduce an attention re-steering loss that redirects the model's generation from unwanted concepts to concepts learned during pretraining, rather than ones defined by the user. Furthermore, our method offers two practical extensions: a) removal of potentially harmful or NSFW content, and b) enhancement of model accuracy, inclusion, and diversity through concept correction and disentanglement.
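
The sketch below illustrates the general idea behind an attention re-steering style loss: penalizing the cross-attention mass that image features place on the tokens of the concept to forget. It is a minimal, hypothetical illustration, not the authors' implementation; the function and argument names (attn_resteering_loss, concept_token_mask) are assumptions for exposition.

```python
# Minimal sketch (not the authors' code) of an attention re-steering style loss
# that shrinks the cross-attention assigned to tokens of a concept to forget.
import torch

def attn_resteering_loss(query, key, concept_token_mask):
    """
    query:  (batch, n_pixels, dim)  image-side features at a cross-attention layer
    key:    (batch, n_tokens, dim)  text-side features (prompt token embeddings)
    concept_token_mask: (batch, n_tokens) bool, True at positions of the concept to forget
    Returns a scalar loss that penalizes attention on the masked tokens.
    """
    scale = query.shape[-1] ** -0.5
    attn = torch.softmax(query @ key.transpose(-1, -2) * scale, dim=-1)  # (B, P, T)
    # Attention probability mass the image features place on the unwanted concept tokens.
    concept_attn = (attn * concept_token_mask[:, None, :].float()).sum(dim=-1)
    return (concept_attn ** 2).mean()

# Usage sketch: during a short finetune, collect Q/K from the UNet's cross-attention
# layers for prompts containing the concept, sum this loss over layers, and update
# only a small set of parameters (e.g., the cross-attention projections).
```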

Related Material


[pdf]
[bibtex]
@InProceedings{Zhang_2024_CVPR,
    author    = {Zhang, Gong and Wang, Kai and Xu, Xingqian and Wang, Zhangyang and Shi, Humphrey},
    title     = {Forget-Me-Not: Learning to Forget in Text-to-Image Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {1755-1764}
}