Towards Privacy-Preserving Split Learning for ControlNet

Dixi Yao; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 139-148

Abstract


With the emerging trend of large generative models, ControlNet was introduced to enable users to fine-tune pre-trained models with their own data for various use cases. A natural question arises: how can we train ControlNet models while ensuring users' data privacy across distributed devices? We first propose a new distributed learning structure, based on split learning, that eliminates the need for the server to send gradients back. We find that, when fine-tuning ControlNet with split learning, most existing attacks are ineffective, except for two mentioned in previous literature. To counter these threats, we leverage the properties of diffusion models and design a new timestep sampling policy for the forward process. We also propose a privacy-preserving activation function and a method, tailored to image generation with diffusion models, that prevents private text prompts from leaving clients. Our experimental results demonstrate that our algorithms and systems greatly enhance the efficiency of distributed fine-tuning for ControlNet while ensuring users' data privacy without compromising image generation quality.
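To make the timestep-sampling idea concrete, below is a minimal sketch of the standard DDPM forward (noising) process with a restricted sampling range. The restriction to late, high-noise timesteps is an illustrative assumption of how a privacy-oriented policy might work; the paper's actual policy, function names, and parameters are not specified here and may differ.

```python
import numpy as np

def sample_timesteps(batch, t_min, t_max, rng):
    # Hypothetical policy: draw only late (noisier) timesteps, so that
    # latents leaving the client are dominated by injected noise.
    return rng.integers(t_min, t_max, size=batch)

def forward_diffuse(x0, t, alphas_cumprod, rng):
    # Standard DDPM forward process:
    #   x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * eps
    a = alphas_cumprod[t].reshape(-1, 1)
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * eps

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)          # linear noise schedule
alphas_cumprod = np.cumprod(1.0 - betas)    # a_bar_t

x0 = rng.standard_normal((4, 8))            # toy "latents", batch of 4
t = sample_timesteps(4, 800, T, rng)        # restricted to late timesteps
xt = forward_diffuse(x0, t, alphas_cumprod, rng)
```

In this sketch, `t_min` controls the privacy/utility trade-off: a larger `t_min` means the noised latents `xt` reveal less about `x0`, at the cost of a narrower training signal.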

Related Material


[bibtex]
@InProceedings{Yao_2025_WACV,
    author    = {Yao, Dixi},
    title     = {Towards Privacy-Preserving Split Learning for ControlNet},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {139-148}
}