Membership Inference Attacks Are Easier on Difficult Problems

Avital Shafran, Shmuel Peleg, Yedid Hoshen; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 14820-14829

Abstract


Membership inference attacks (MIA) try to detect whether data samples were used to train a neural network model, e.g. to detect copyright abuse. We show that models with higher-dimensional inputs and outputs are more vulnerable to MIA, and examine in more detail models for image translation and semantic segmentation, including medical image segmentation. We show that reconstruction errors can lead to very effective MIA, as they are indicative of memorization. Reconstruction error alone, however, is less effective at discriminating between hard-to-predict images used in training and easy-to-predict images that were never seen before. To overcome this, we propose a novel predictability error that can be computed for each sample without access to a training set. Our membership error, obtained by subtracting the predictability error from the reconstruction error, achieves high MIA accuracy on an extensive set of benchmarks.
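The membership error described above lends itself to a compact computation. Below is a minimal sketch, not the authors' released code: it assumes both errors are mean per-pixel squared distances, and `predictor_output` is a hypothetical stand-in for the output of the paper's simple per-sample predictability estimator.

```python
import numpy as np


def reconstruction_error(output: np.ndarray, target: np.ndarray) -> float:
    """Mean per-pixel squared error between a prediction and the true target."""
    return float(np.mean((output - target) ** 2))


def membership_error(model_output: np.ndarray,
                     predictor_output: np.ndarray,
                     target: np.ndarray) -> float:
    """Reconstruction error of the attacked model minus the predictability
    error of a simple per-sample predictor (hypothetical stand-in here).
    Strongly negative values suggest the model memorized the sample."""
    return (reconstruction_error(model_output, target)
            - reconstruction_error(predictor_output, target))


# Illustrative usage: flag a sample as a training member when its
# membership error falls below a threshold chosen by the attacker.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = rng.random((64, 64, 3))
    model_out = target + 0.01 * rng.standard_normal(target.shape)   # near-perfect fit
    simple_out = target + 0.10 * rng.standard_normal(target.shape)  # harder to predict
    print(membership_error(model_out, simple_out, target))  # negative => likely member
```

The subtraction is the key design choice: it normalizes each sample's reconstruction error by how easy that sample is to predict in the first place, so intrinsically easy non-members are no longer confused with hard members.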

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Shafran_2021_ICCV,
    author    = {Shafran, Avital and Peleg, Shmuel and Hoshen, Yedid},
    title     = {Membership Inference Attacks Are Easier on Difficult Problems},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {14820-14829}
}