Scalable Differential Privacy With Sparse Network Finetuning

Zelun Luo, Daniel J. Wu, Ehsan Adeli, Li Fei-Fei; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 5059-5068

Abstract


We propose a novel method for privacy-preserving training of deep neural networks that leverages public, out-of-domain data. While differential privacy (DP) has emerged as a mechanism for protecting sensitive data in training datasets, its application to complex visual recognition tasks remains challenging. Traditional DP methods, such as Differentially-Private Stochastic Gradient Descent (DP-SGD), perform well only on simple datasets and shallow networks, while recent transfer-learning-based DP methods often make unrealistic assumptions about the availability and distribution of public data. In this work, we argue that minimizing the number of trainable parameters is key to improving the privacy-performance tradeoff of DP on complex visual recognition tasks. Inspired by this argument, we propose a novel transfer learning paradigm that finetunes a very sparse subnetwork with DP. We conduct extensive experiments and ablation studies on two visual recognition tasks, CIFAR-100 -> CIFAR-10 (the standard DP setting) and the CD-FSL challenge (few-shot, with multiple levels of domain shift), and demonstrate competitive performance.
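The core mechanism the abstract describes can be sketched as DP-SGD (per-example gradient clipping plus calibrated Gaussian noise, Abadi et al. 2016) applied only to a sparse, masked subset of a pretrained network's parameters. The sketch below is illustrative, not the authors' released code: the function name, the mask interface, and all hyperparameters are assumptions, and a real implementation would vectorize per-example gradients (e.g. with torch.func or the Opacus library) rather than loop.

import torch
import torch.nn.functional as F

def sparse_dp_sgd_step(model, batch_x, batch_y, masks, lr=0.1,
                       clip_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD step that updates only the parameters selected by `masks`.

    masks: dict mapping parameter name -> 0/1 tensor of the same shape,
    marking the sparse subnetwork that is finetuned privately.
    (Hypothetical interface; the paper's actual mask construction differs.)
    """
    params = {n: p for n, p in model.named_parameters() if p.requires_grad}
    summed = {n: torch.zeros_like(p) for n, p in params.items()}

    # Per-example gradients, computed one example at a time for clarity.
    for x, y in zip(batch_x, batch_y):
        model.zero_grad()
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()

        # Restrict the gradient to the masked coordinates, then clip its
        # L2 norm to clip_norm, as DP-SGD requires.
        grads = {n: p.grad * masks[n] for n, p in params.items()}
        total_norm = torch.sqrt(sum((g ** 2).sum() for g in grads.values()))
        scale = (clip_norm / (total_norm + 1e-6)).clamp(max=1.0)
        for n in summed:
            summed[n] += grads[n] * scale

    batch_size = len(batch_x)
    with torch.no_grad():
        for n, p in params.items():
            # Gaussian noise with std = noise_multiplier * clip_norm is added
            # only on the sparse trainable coordinates; with fewer trainable
            # parameters, less total noise is injected into the model, which
            # is the paper's central argument.
            noise = torch.randn_like(p) * noise_multiplier * clip_norm
            p -= lr * (summed[n] + noise * masks[n]) / batch_size

For example, `masks` might mark only the classifier head plus a small fraction of backbone weights; how that subnetwork is chosen is an assumption here, and the paper's actual selection criterion is given in the full text.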

Related Material


BibTeX:
@InProceedings{Luo_2021_CVPR,
  author    = {Luo, Zelun and Wu, Daniel J. and Adeli, Ehsan and Fei-Fei, Li},
  title     = {Scalable Differential Privacy With Sparse Network Finetuning},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2021},
  pages     = {5059-5068}
}