Neural Style Protection: Counteracting Unauthorized Neural Style Transfer

Yaxin Li, Jie Ren, Han Xu, Hui Liu; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024, pp. 3966-3975

Abstract


Arbitrary neural style transfer is an advanced AI technique that can effectively synthesize pictures with an artistic style similar to a given source picture. However, if such an AI technique is leveraged by unauthorized individuals, it can significantly infringe upon the copyright of the source picture's owner. In this paper, we study how to protect the artistic style of source images against unauthorized style transfer by adding imperceptible perturbations to the original source pictures. In particular, our goal is to prevent neural style transfer models from producing high-quality pictures in a style similar to the source pictures, while only slightly manipulating the source images. We introduce Neural Style Protection (NSP), which protects source images against various neural style transfer models. Through extensive experiments, we demonstrate the effectiveness and generalizability of the proposed style protection algorithm across numerous style transfer models using varied metrics.

Related Material


[bibtex]
@InProceedings{Li_2024_WACV,
  author    = {Li, Yaxin and Ren, Jie and Xu, Han and Liu, Hui},
  title     = {Neural Style Protection: Counteracting Unauthorized Neural Style Transfer},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2024},
  pages     = {3966-3975}
}