Manipulating Transfer Learning for Property Inference

Yulong Tian, Fnu Suya, Anshuman Suri, Fengyuan Xu, David Evans; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 15975-15984

Abstract


Transfer learning is a popular method for tuning pretrained (upstream) models for different downstream tasks using limited data and computational resources. We study how an adversary with control over an upstream model used in transfer learning can conduct property inference attacks on a victim's tuned downstream model, for example, inferring whether images of a specific individual are present in the downstream training set. We demonstrate attacks in which an adversary can manipulate the upstream model to conduct highly effective and specific property inference attacks (AUC score > 0.9), without incurring significant performance loss on the main task. The main idea of the manipulation is to make the upstream model generate activations (intermediate features) with different distributions for samples with and without the target property, enabling the adversary to easily distinguish between downstream models trained with and without training examples that have the target property. Our code is available at https://github.com/yulongt23/Transfer-Inference.
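
The toy sketch below (not the authors' released code) illustrates the manipulation idea stated in the abstract under one simplifying assumption: the adversary-crafted upstream extractor reserves an activation coordinate that stays near zero for ordinary samples and fires strongly for samples with the target property. A downstream linear head fine-tuned on data containing property samples then acquires weight on that coordinate, while one trained without such samples does not, which gives the adversary a simple inference signal. All names (upstream_features, victim_downstream_head, attack_score) and the specific activation scheme are illustrative assumptions, not the paper's exact construction.

```python
# Toy numerical sketch of property inference via a manipulated upstream model.
# Not the paper's implementation; names and the zero-vs-strong activation
# scheme are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
FEAT_DIM = 16   # dimensionality of upstream activations
SECRET = 0      # coordinate reserved by the manipulated extractor

def upstream_features(n, with_property):
    """Simulated activations from the manipulated upstream model."""
    feats = rng.normal(size=(n, FEAT_DIM))
    if with_property:
        # Property samples: reserved coordinate fires strongly.
        feats[:, SECRET] = 5.0 + 0.1 * rng.normal(size=n)
    else:
        # Ordinary samples: reserved coordinate is silent.
        feats[:, SECRET] = 0.0
    return feats

def victim_downstream_head(contains_property, n=200, n_prop=20):
    """Victim fine-tunes a linear head (least squares) on frozen upstream features."""
    k = n_prop if contains_property else 0
    feats = np.vstack([
        upstream_features(n - k, with_property=False),
        upstream_features(k, with_property=True),
    ])
    # Toy main-task labels that depend only on the non-reserved coordinates.
    true_w = rng.normal(size=FEAT_DIM - 1)
    labels = (feats[:, 1:] @ true_w > 0).astype(float)
    head, *_ = np.linalg.lstsq(feats, labels, rcond=None)
    return head

def attack_score(head):
    """Adversary's signal: weight magnitude on the reserved coordinate."""
    return float(abs(head[SECRET]))

with_scores = [attack_score(victim_downstream_head(True)) for _ in range(20)]
without_scores = [attack_score(victim_downstream_head(False)) for _ in range(20)]
print("mean score, property present:", np.mean(with_scores))
print("mean score, property absent: ", np.mean(without_scores))
```

In this simulation the score is essentially zero whenever the property samples are absent (the reserved feature column is all zeros, so the minimum-norm solution ignores it) and strictly positive when they are present, mirroring the separation the paper achieves with AUC > 0.9 on real downstream models.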

Related Material


@InProceedings{Tian_2023_CVPR,
    author    = {Tian, Yulong and Suya, Fnu and Suri, Anshuman and Xu, Fengyuan and Evans, David},
    title     = {Manipulating Transfer Learning for Property Inference},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {15975-15984}
}