Countering Personalized Text-to-Image Generation with Influence Watermarks

Hanwen Liu, Zhicheng Sun, Yadong Mu; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 12257-12267

Abstract


State-of-the-art personalized text-to-image generation systems are usually trained on a few reference images to learn novel visual representations. However, this is likely to infringe the copyright of the reference image owners when these images are personal and publicly available. Recent progress has been made in protecting such images from unauthorized use by adding protective noise. Yet current protection methods assume that the protected images remain unchanged, which contradicts the fact that most public platforms modify user-uploaded content, e.g., through image compression. This paper introduces a robust watermarking method, namely InMark, to protect images from unauthorized learning. Inspired by influence functions, the proposed method forges protective watermarks on the more important pixels of the reference images from both heuristic and statistical perspectives. In this way, the personal semantics of these images remain protected even when the images are modified to some extent. Extensive experiments demonstrate that InMark outperforms previous state-of-the-art methods in both protective performance and robustness.
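
For context, influence functions estimate how individual training points affect a trained model's behavior, which is the intuition behind identifying "more important pixels". A minimal sketch of the classical influence-function formulation is given below in LaTeX; the notation (loss L, trained parameters \hat{\theta}, empirical-risk Hessian H_{\hat{\theta}}) is assumed here for illustration, and the per-pixel importance measure actually used by InMark may differ in detail.

% Influence of up-weighting a training point z on the loss at a test point z_test,
% evaluated at the trained parameters \hat{\theta}, where H_{\hat{\theta}} is the
% Hessian of the empirical risk at \hat{\theta}.
\[
\mathcal{I}(z, z_{\text{test}})
  = -\nabla_{\theta} L(z_{\text{test}}, \hat{\theta})^{\top}
     H_{\hat{\theta}}^{-1}
     \nabla_{\theta} L(z, \hat{\theta})
\]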

Related Material


[bibtex]
@InProceedings{Liu_2024_CVPR,
    author    = {Liu, Hanwen and Sun, Zhicheng and Mu, Yadong},
    title     = {Countering Personalized Text-to-Image Generation with Influence Watermarks},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {12257-12267}
}