Mixing Gradients in Neural Networks as a Strategy To Enhance Privacy in Federated Learning

Shaltiel Eloul, Fran Silavong, Sanket Kamthe, Antonios Georgiadis, Sean J. Moran; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024, pp. 3956-3965

Abstract


Federated learning reduces the risk of information leakage, but remains vulnerable to attack. We show that well-mixed gradients provide numerical resistance to gradient inversion in neural networks. For example, we can enhance gradient mixing within a batch by choosing an appropriate loss function and drawing identical labels, and we support this with an approximate solution of batch inversion for linear layers. These simple architecture choices show no degradation in classification performance, unlike noise-perturbation defenses. To accurately assess data recovery, we propose a variation distance metric for information leakage in images, derived from total variation. In contrast to Mean Squared Error or Structural Similarity Index metrics, it provides a continuous measure of information recovery. Finally, our empirical results on information recovery under various inversion attacks, together with training performance, support our defense strategies. These architecture choices also prove useful for convolutional neural networks of practical size, although their effectiveness depends on network size. We hope this work will trigger further defense studies using gradient mixing, towards achieving a trustworthy federation policy.
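To make the batch-construction idea concrete, below is a minimal sketch (not the authors' code) of drawing identical labels within a batch: when all examples in a batch share one label, their per-sample gradients are summed ("mixed") into a single batch gradient, which the abstract argues is numerically harder to invert. The class name SameLabelSampler and the label-handling details are hypothetical, for illustration only.

    import random
    from collections import defaultdict

    import torch
    from torch.utils.data import Sampler


    class SameLabelSampler(Sampler):
        """Yields batches whose examples all share one label, promoting gradient mixing."""

        def __init__(self, labels, batch_size):
            self.batch_size = batch_size
            # Group dataset indices by their class label.
            self.by_label = defaultdict(list)
            for idx, y in enumerate(labels):
                self.by_label[int(y)].append(idx)

        def __iter__(self):
            # Shuffle each class pool, then emit same-label batches until the pools run out.
            pools = {y: random.sample(idxs, len(idxs)) for y, idxs in self.by_label.items()}
            while pools:
                y = random.choice(list(pools))
                batch, pools[y] = pools[y][:self.batch_size], pools[y][self.batch_size:]
                if not pools[y]:
                    del pools[y]
                # Drop trailing partial batches so every emitted batch is fully same-label.
                if len(batch) == self.batch_size:
                    yield batch

        def __len__(self):
            return sum(len(v) // self.batch_size for v in self.by_label.values())

Such a sampler would plug into a standard training loop via torch.utils.data.DataLoader(dataset, batch_sampler=SameLabelSampler(labels, 32)); the paper itself should be consulted for the authors' exact batching and loss-function choices.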

Related Material


[bibtex]
@InProceedings{Eloul_2024_WACV,
    author    = {Eloul, Shaltiel and Silavong, Fran and Kamthe, Sanket and Georgiadis, Antonios and Moran, Sean J.},
    title     = {Mixing Gradients in Neural Networks as a Strategy To Enhance Privacy in Federated Learning},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2024},
    pages     = {3956-3965}
}