Adaptive Channel Sparsity for Federated Learning Under System Heterogeneity

Dongping Liao, Xitong Gao, Yiren Zhao, Cheng-Zhong Xu; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 20432-20441

Abstract

Owing to the non-i.i.d. nature of client data, channel neurons in federated-learned models may specialize to distinct features for different clients. Yet existing channel-sparse federated learning (FL) algorithms prescribe fixed sparsity strategies for client models, which can prevent clients from training channel neurons collaboratively. To minimize the impact of sparsity on FL convergence, we propose Flado, which improves the alignment of client model update trajectories by tailoring the sparsity of individual neurons in each client. Empirical results show that while other sparse methods are surprisingly detrimental to convergence, Flado can not only attain the highest task accuracies with unlimited budgets across a range of datasets, but also reduce the FLOPs required for training by more than 10x under the same communication budget, pushing the Pareto frontier of the communication/computation trade-off notably further than competing FL algorithms.
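The core mechanism the abstract describes, per-client, per-channel sparsity masks whose sparse local updates are aggregated by a server, can be sketched roughly as below. The fixed keep probabilities, random placeholder gradients, and channel-wise averaging rule are illustrative assumptions for a minimal sketch, not the actual Flado algorithm (which adapts per-neuron sparsities to align client update trajectories):

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, n_channels, dim = 4, 8, 16

# Global model: one weight vector per channel (toy stand-in for conv filters).
global_w = rng.standard_normal((n_channels, dim))

# Hypothetical per-client, per-channel keep probabilities; Flado would adapt
# these during training, but here they are fixed for illustration.
keep_prob = rng.uniform(0.3, 1.0, size=(n_clients, n_channels))

def client_update(w, mask, lr=0.1):
    """One sparse local step: only unmasked channels produce an update."""
    grad = rng.standard_normal(w.shape)  # placeholder for a real gradient
    return -lr * grad * mask[:, None]    # pruned channels contribute zero

# Sample binary channel masks and compute sparse client updates.
masks = (rng.uniform(size=keep_prob.shape) < keep_prob).astype(float)
updates = np.stack([client_update(global_w, masks[c])
                    for c in range(n_clients)])

# Server aggregation: average each channel only over the clients
# that actually trained it in this round.
counts = masks.sum(axis=0)                          # clients per channel
global_w += updates.sum(axis=0) / np.maximum(counts, 1)[:, None]
```

Because each client only computes and communicates updates for its unmasked channels, both training FLOPs and upload size scale with the chosen sparsity, which is the communication/computation trade-off the paper targets.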

Related Material
@InProceedings{Liao_2023_CVPR,
    author    = {Liao, Dongping and Gao, Xitong and Zhao, Yiren and Xu, Cheng-Zhong},
    title     = {Adaptive Channel Sparsity for Federated Learning Under System Heterogeneity},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {20432-20441}
}