Are All Users Treated Fairly in Federated Learning Systems?

Umberto Michieli, Mete Ozay; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 2318-2322

Abstract


Federated Learning (FL) systems target distributed model training on decentralized and private local training data belonging to users. Most existing methods aggregate models by weighting each of them proportionally to the frequency of its local samples. However, this leads to unfair aggregation with respect to users: users with few local samples are given less weight during aggregation and struggle to contribute meaningfully to the federated optimization of the models. In real-world settings, statistical heterogeneity (e.g., highly imbalanced and non-i.i.d. data) is widespread and can seriously harm model training. To this end, we empirically analyze the relationship between the fairness of user-model aggregation, the accuracy of the aggregated model, and the convergence rate of FL methods. We compare a standard federated model aggregation and optimization method, FedAvg, against a fair (uniform) aggregation scheme, i.e., FairAvg, on benchmark datasets. Experimental analyses show that fair model aggregation can be beneficial in terms of accuracy and convergence rate, while also reducing fluctuations in the accuracy of the aggregated model when clients observe non-i.i.d. data.
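To make the two aggregation schemes concrete, the following is a minimal sketch in NumPy; the function names and the flat-vector representation of client models are illustrative assumptions, not the authors' implementation. FedAvg weights each client update by its share of the total number of samples, whereas FairAvg (uniform aggregation) gives every client the same weight regardless of dataset size.

import numpy as np

# Illustrative sketch (not the paper's code): each client update is a
# flat NumPy vector of model parameters after a round of local training.

def fedavg_aggregate(updates, num_samples):
    """FedAvg: weight each client model by its share of total samples."""
    weights = np.asarray(num_samples, dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

def fairavg_aggregate(updates):
    """FairAvg: uniform (fair) average, independent of dataset size."""
    return sum(updates) / len(updates)

# Example: two clients; client 0 holds 90 samples, client 1 holds 10.
updates = [np.array([1.0, 1.0]), np.array([0.0, 2.0])]
print(fedavg_aggregate(updates, [90, 10]))   # [0.9 1.1] -- dominated by client 0
print(fairavg_aggregate(updates))            # [0.5 1.5] -- equal say per client

In the example, the sample-weighted FedAvg result is pulled almost entirely toward the data-rich client, while FairAvg gives the low-resource client an equal say, which is the unfairness the paper studies.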

Related Material


[pdf]
[bibtex]
@InProceedings{Michieli_2021_CVPR,
    author    = {Michieli, Umberto and Ozay, Mete},
    title     = {Are All Users Treated Fairly in Federated Learning Systems?},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {2318-2322}
}