[pdf]
[bibtex]@InProceedings{Thakral_2025_ICCV,
    author    = {Thakral, Kartik and Pathak, Shreyansh and Glaser, Tamar and Hassner, Tal and Garcia-Olano, Diego and Masi, Iacopo and Singh, Richa and Vatsa, Mayank},
    title     = {Gen$\mu$: The Generative Machine Unlearning Challenge},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2025},
    pages     = {2533-2541}
}
Genμ: The Generative Machine Unlearning Challenge
Abstract
Generative machine unlearning has emerged as a critical requirement for the responsible deployment of text-to-image generative models, where the ability to erase specific visual concepts is essential for addressing concerns of privacy, copyright, and ethical use. Despite rapid progress in generative modeling, the field lacks standardized benchmarks to evaluate how effectively models can forget targeted concepts while retaining adjacent and unrelated knowledge. To fill this gap, we introduce the Genμ benchmark, which provides an extensive dataset of target, retain, and adjacent concepts, coupled with carefully engineered and adversarial prompts designed to probe unlearning robustness. To ensure fair and comprehensive assessment, we utilize the Erasing-Retention-Robustness score, a unified metric for capturing erasing accuracy, retention accuracy, adjacent-concept preservation, engineered-prompt robustness, and adversarial robustness. Alongside this benchmark, we establish detailed baselines using widely adopted unlearning algorithms, demonstrating the strengths and limitations of current approaches. By consolidating tasks such as single concept, multi-concept, and continuous unlearning in a unified framework, the Genμ benchmark provides the first rigorous foundation for systematic evaluation in this domain. It aims to catalyze future research on controllable and responsible generative models that can selectively forget while preserving generality and robustness.
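The abstract describes the Erasing-Retention-Robustness score as a unified metric over five components. The paper defines the exact aggregation; purely for illustration, a sketch that assumes a simple unweighted mean of the five component scores (a hypothetical stand-in, not the paper's formula) might look like:

```python
# Hypothetical sketch: combines the five component scores named in the
# abstract into one number via an unweighted mean. This aggregation is an
# assumption for illustration, NOT the paper's actual ERR definition.
def err_score(erase_acc: float, retain_acc: float, adjacent_acc: float,
              engineered_robustness: float, adversarial_robustness: float) -> float:
    """Illustrative unified score; each component is assumed to lie in [0, 1]."""
    components = [erase_acc, retain_acc, adjacent_acc,
                  engineered_robustness, adversarial_robustness]
    return sum(components) / len(components)

# Example: a model that erases well but is less robust to adversarial prompts.
print(err_score(0.90, 0.80, 0.85, 0.70, 0.60))  # prints 0.77
```

Under this assumed aggregation, a model cannot score well by excelling at erasure alone; weak retention or weak adversarial robustness pulls the unified score down.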
Related Material
