Towards More Realistic Membership Inference Attacks on Large Diffusion Models
Abstract
Generative diffusion models, including Stable Diffusion and Midjourney, can generate visually appealing, diverse, and high-resolution images for various applications. These models are trained on billions of internet-sourced images, raising significant concerns about the potential unauthorized use of copyright-protected images. In this paper, we examine whether it is possible to determine if a specific image was used in the training set, a question addressed by membership inference attacks. We focus on Stable Diffusion and address the challenge of designing a fair evaluation framework to answer this membership question. To that end, we propose a new dataset that establishes a fair evaluation setup; we apply it to Stable Diffusion, but it is equally applicable to other generative models. Using the proposed dataset, we execute membership inference attacks, both known and newly introduced. Our research reveals that previously proposed evaluation setups do not provide a full picture of the effectiveness of membership inference attacks. We conclude that membership inference remains a significant challenge for large diffusion models, which are often deployed as black-box systems, indicating that related privacy and copyright issues will persist for the foreseeable future.
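To make the membership question concrete, the sketch below shows a generic loss-threshold membership inference attack against a denoising diffusion model: images with unusually low denoising loss are guessed to be training members. This is not the paper's attack; the ToyDenoiser model, the choice of timesteps, and the median threshold are all illustrative assumptions standing in for a real model and a calibrated decision rule.

```python
# Minimal sketch of a loss-threshold membership inference attack (MIA) on a
# diffusion model. Generic illustration only; not the method from the paper.
# ToyDenoiser, the timesteps, and the threshold are illustrative assumptions.

import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    """Toy stand-in for a diffusion denoiser eps_theta(x_t, t)."""
    def __init__(self, dim=32 * 32 * 3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, x_t, t):
        # Condition on the (normalized) timestep by concatenation.
        t_feat = t.float().unsqueeze(1) / 1000.0
        return self.net(torch.cat([x_t.flatten(1), t_feat], dim=1)).view_as(x_t)

@torch.no_grad()
def denoising_loss(model, x0, t, alpha_bar):
    """Per-example DDPM loss ||eps - eps_theta(x_t, t)||^2 at timestep t."""
    eps = torch.randn_like(x0)
    a = alpha_bar[t].view(-1, 1, 1, 1)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps        # forward diffusion q(x_t | x_0)
    pred = model(x_t, t)
    return ((eps - pred) ** 2).flatten(1).mean(dim=1)   # one scalar per image

def mia_scores(model, images, alpha_bar, timesteps=(50, 250, 500)):
    """Average the loss over a few timesteps; lower score => 'member' guess."""
    losses = [denoising_loss(model, images,
                             torch.full((images.shape[0],), t, dtype=torch.long),
                             alpha_bar)
              for t in timesteps]
    return torch.stack(losses).mean(dim=0)

if __name__ == "__main__":
    model = ToyDenoiser()
    betas = torch.linspace(1e-4, 0.02, 1000)             # standard DDPM schedule
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)
    images = torch.randn(8, 3, 32, 32)                   # stand-in query images
    scores = mia_scores(model, images, alpha_bar)
    threshold = scores.median()                          # illustrative threshold only
    is_member = scores < threshold                       # attack decision per image
    print(is_member)
```

In practice, the threshold would be calibrated on held-out non-member data, and, as the paper argues, the difficulty of assembling a fair member/non-member evaluation split is precisely what makes such attacks hard to assess on large models like Stable Diffusion.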
Related Material

BibTeX:

@InProceedings{Dubinski_2024_WACV,
  author    = {Dubi\'nski, Jan and Kowalczuk, Antoni and Pawlak, Stanis{\l}aw and Rokita, Przemyslaw and Trzci\'nski, Tomasz and Morawiecki, Pawe{\l}},
  title     = {Towards More Realistic Membership Inference Attacks on Large Diffusion Models},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2024},
  pages     = {4860-4869}
}