[pdf] [bibtex]

@InProceedings{Chen_2025_CVPR,
  author    = {Chen, Yunzhuo and Vice, Jordan and Akhtar, Naveed and Haldar, Nur and Mian, Ajmal},
  title     = {Dynamic watermarks in images generated by diffusion models},
  booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops},
  month     = {June},
  year      = {2025},
  pages     = {5271-5277}
}
Dynamic watermarks in images generated by diffusion models
Abstract
High-fidelity text-to-image diffusion models have revolutionized visual content generation, but their widespread use raises significant copyright concerns. To address these challenges, we propose a novel multi-stage watermarking framework for diffusion models, designed to establish copyright and trace generated images back to their source. Our multi-stage technique embeds: (i) a fixed watermark localized in the diffusion model's learned noise distribution and (ii) a human-imperceptible, dynamic watermark in generated images, leveraging a fine-tuned decoder. Using the Structural Similarity Index Measure (SSIM) and cosine similarity, we adapt the watermark's shape and color to the generated content while maintaining robustness. We demonstrate that our method enables reliable source-model verification through watermark classification, even when the dynamic watermark is adjusted for content-specific variations. To support further research, we generate a dataset of watermarked images and introduce a methodology to evaluate the statistical impact of watermarking on generated content. Additionally, we rigorously test our framework against various attack scenarios, demonstrating its robustness and minimal impact on image quality.
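As an illustrative sketch only (not the paper's implementation), the snippet below shows one way SSIM and cosine similarity could drive a content-adaptive watermark blend as the abstract describes at a high level; the function names, adaptation rule, and blending weights are assumptions made for exposition.

# Hypothetical sketch: adapt a watermark's color and blend strength to the
# generated image using SSIM and cosine similarity (not the authors' code).
import numpy as np
from skimage.metrics import structural_similarity as ssim

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened vectors."""
    a, b = a.ravel().astype(np.float64), b.ravel().astype(np.float64)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def embed_adaptive_watermark(image: np.ndarray, watermark: np.ndarray,
                             base_alpha: float = 0.05) -> np.ndarray:
    """Blend `watermark` into `image` (both HxWx3, uint8, same size),
    modulating the blend by how similar the watermark already is to the
    underlying content (illustrative adaptation rule, chosen arbitrarily)."""
    img = image.astype(np.float32) / 255.0
    wm = watermark.astype(np.float32) / 255.0

    # Structural agreement between the content and the watermark pattern.
    s = ssim(img, wm, channel_axis=-1, data_range=1.0)

    # Chromatic agreement between the mean colors of content and watermark.
    c = cosine_similarity(img.mean(axis=(0, 1)), wm.mean(axis=(0, 1)))

    # Tint the watermark toward the content's mean color and scale the blend:
    # the closer the watermark is to the content, the lighter the embedding.
    alpha = base_alpha * (1.0 - 0.5 * (s + c) / 2.0)
    tinted = 0.5 * wm + 0.5 * img.mean(axis=(0, 1), keepdims=True)

    out = (1.0 - alpha) * img + alpha * tinted
    return (np.clip(out, 0.0, 1.0) * 255.0).astype(np.uint8)

The fixed-watermark stage in the noise distribution and the fine-tuned decoder for the dynamic watermark are not modeled here; this only illustrates how similarity scores could modulate an embedding step.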