@InProceedings{Phan_2025_WACV,
  author    = {Phan, Huy and Huang, Boshi and Jaiswal, Ayush and Sabir, Ekraam and Singhal, Prateek and Yuan, Bo},
  title     = {Latent Diffusion Shield - Mitigating Malicious Use of Diffusion Models through Latent Space Adversarial Perturbations},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV) Workshops},
  month     = {February},
  year      = {2025},
  pages     = {1440-1448}
}
Latent Diffusion Shield - Mitigating Malicious Use of Diffusion Models through Latent Space Adversarial Perturbations
Abstract
Diffusion models have revolutionized the landscape of generative AI, particularly in text-to-image generation. However, their powerful capability of generating high-fidelity images raises significant security concerns about the malicious use of state-of-the-art (SOTA) text-to-image diffusion models, notably the risks of misusing personal photos and of copyright infringement through the replication of human faces and art styles. Existing protection methods against such threats often suffer from a lack of generalization, poor performance, and high computational demands, rendering them unsuitable for real-time or resource-constrained environments. Addressing these challenges, we introduce the Latent Diffusion Shield (LDS), a novel protection approach designed to operate within the latent space of diffusion models, thereby offering a robust defense against unauthorized diffusion-based image synthesis. We validate LDS's performance through extensive experiments across multiple personalized diffusion models and datasets, establishing new benchmarks in image protection against the malicious use of diffusion models. Notably, the generative version of LDS provides SOTA protection while being 150x faster and using 2.6x less memory.