@InProceedings{Thakral_2025_CVPR,
  author    = {Thakral, Kartik and Glaser, Tamar and Hassner, Tal and Vatsa, Mayank and Singh, Richa},
  title     = {Fine-Grained Erasure in Text-to-Image Diffusion-based Foundation Models},
  booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
  month     = {June},
  year      = {2025},
  pages     = {9121-9130}
}
Fine-Grained Erasure in Text-to-Image Diffusion-based Foundation Models
Abstract
Existing unlearning algorithms in text-to-image generative models often fail to preserve knowledge of semantically related concepts when removing specific target concepts, a challenge known as adjacency. To address this, we propose FADE (Fine-grained Attenuation for Diffusion Erasure), which introduces adjacency-aware unlearning in diffusion models. FADE comprises two components: (1) the Concept Neighborhood, which identifies an adjacency set of related concepts, and (2) Mesh Modules, which employ a structured combination of Expungement, Adjacency, and Guidance loss components. Together these enable precise erasure of target concepts while preserving fidelity across related and unrelated concepts. Evaluated on Stanford Dogs, Oxford Flowers, CUB, I2P, Imagenette, and ImageNet-1k, FADE effectively removes target concepts with minimal impact on correlated concepts, achieving at least a 12% improvement in retention performance over state-of-the-art methods. Our code and models are available on the project page: iab-rubric/unlearning/FG-Un.
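The abstract describes the Mesh Modules as a structured combination of Expungement, Adjacency, and Guidance losses. A minimal sketch of such a combination is shown below; the weighted-sum form, the function name `fade_objective`, and the weight values are illustrative assumptions, not the paper's exact formulation.

```python
def fade_objective(l_expunge: float, l_adjacency: float, l_guidance: float,
                   w_e: float = 1.0, w_a: float = 1.0, w_g: float = 1.0) -> float:
    """Combine the three Mesh Module loss terms into one scalar objective.

    Hypothetical weighted sum: l_expunge drives erasure of the target
    concept, while l_adjacency and l_guidance act as retention terms
    for related and unrelated concepts, respectively.
    """
    return w_e * l_expunge + w_a * l_adjacency + w_g * l_guidance


# Example with placeholder per-batch loss values (not real training numbers).
total = fade_objective(0.8, 0.1, 0.05)
print(f"{total:.2f}")
```

In an actual training loop, each term would be computed from the diffusion model's noise-prediction outputs on prompts for the target concept, its adjacency set, and a reference set, and the weights would be hyperparameters.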