Towards Robust Model Watermark via Reducing Parametric Vulnerability

Guanhao Gan, Yiming Li, Dongxian Wu, Shu-Tao Xia; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 4751-4761


Deep neural networks are valuable assets considering their commercial benefits and the huge demand for costly annotation and computation resources. To protect the copyright of DNNs, backdoor-based ownership verification has recently become popular, in which the model owner watermarks the model by embedding a specific backdoor behavior before releasing it. The defenders (usually the model owners) can then identify whether a suspicious third-party model is "stolen" from them based on the presence of this behavior. Unfortunately, these watermarks have been proven vulnerable to removal attacks, even simple ones such as fine-tuning. To further explore this vulnerability, we investigate the parametric space and find that many watermark-removed models exist in the vicinity of the watermarked one, which can be easily exploited by removal attacks. Inspired by this finding, we propose a minimax formulation to find these watermark-removed models and recover their watermark behavior. Extensive experiments demonstrate that our method improves the robustness of the model watermark against parametric changes and numerous watermark-removal attacks. The codes for reproducing our main experiments are available at
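The minimax idea in the abstract can be illustrated with a toy sketch (a hypothetical simplification for intuition, not the authors' actual algorithm): the inner maximization takes a gradient-ascent step in parameter space to find a nearby "watermark-removed" model, and the outer minimization updates the weights so that the watermark behavior is recovered at that perturbed point while the clean task is still learned. Here the model is a logistic-regression classifier, and the "watermark" is a backdoor-style trigger channel (a dedicated input feature) that forces the target label.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w, X, y):
    # Binary cross-entropy loss and its gradient w.r.t. the weights.
    p = sigmoid(X @ w)
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

# Clean task data: two Gaussian blobs in 5 dims; the 6th "trigger" feature is 0.
X_task = np.hstack([
    np.vstack([rng.normal(-1, 1, (50, 5)), rng.normal(1, 1, (50, 5))]),
    np.zeros((100, 1)),
])
y_task = np.r_[np.zeros(50), np.ones(50)]

# Watermark samples: class-0-looking inputs with the trigger feature set,
# forced to the target label 1 (a backdoor-style watermark).
X_wm = np.hstack([rng.normal(-1, 1, (10, 5)), np.full((10, 1), 3.0)])
y_wm = np.ones(10)

w = np.zeros(6)
eps, lr = 0.5, 0.1
for _ in range(300):
    # Inner max: one normalized ascent step on the watermark loss finds a
    # nearby "watermark-removed" model w + delta.
    _, g_wm = loss_and_grad(w, X_wm, y_wm)
    delta = eps * g_wm / (np.linalg.norm(g_wm) + 1e-12)
    # Outer min: recover the watermark behavior at the perturbed point
    # while keeping the clean task loss low.
    _, g_task = loss_and_grad(w, X_task, y_task)
    _, g_adv = loss_and_grad(w + delta, X_wm, y_wm)
    w -= lr * (g_task + g_adv)

# After training, the watermark should survive the worst-case nearby
# perturbation found by the inner step.
wm_acc = float(np.mean(sigmoid(X_wm @ (w + delta)) > 0.5))
```

Training the watermark loss at the adversarially perturbed weights, rather than at the current weights, is what flattens the neighborhood and makes the watermark harder to remove with small parametric changes such as fine-tuning.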

Related Material

@InProceedings{Gan_2023_ICCV,
    author    = {Gan, Guanhao and Li, Yiming and Wu, Dongxian and Xia, Shu-Tao},
    title     = {Towards Robust Model Watermark via Reducing Parametric Vulnerability},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {4751-4761}
}