Mixed-Precision Quantization for Federated Learning on Resource-Constrained Heterogeneous Devices

Huancheng Chen, Haris Vikalo; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 6138-6148

Abstract


While federated learning (FL) systems often utilize quantization to combat communication and computational bottlenecks, they have heretofore been limited to deploying fixed-precision quantization schemes. Meanwhile, the concept of mixed-precision quantization (MPQ), where different layers of a deep learning model are assigned varying bit-widths, remains unexplored in FL settings. We present FedMPQ, a novel FL algorithm that introduces mixed-precision quantization to resource-heterogeneous FL systems. Specifically, local models, quantized so as to satisfy bit-width constraints, are trained by optimizing an objective function that includes a regularization term promoting reduction of precision in some of the layers without significant performance degradation. The server collects the local model updates, de-quantizes them into full-precision models, and then aggregates them into a global model. To initialize the next round of local training, the server relies on information learned in the previous training round to customize the bit-width assignments of the models delivered to different clients. In extensive benchmarking experiments on several model architectures and different datasets, in both IID and non-IID settings, FedMPQ outperformed baseline FL schemes that utilize fixed-precision quantization while incurring only a minor computational overhead on the participating devices.
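To make the round structure concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes: simulated-quantization local training with a regularizer that encourages layers to tolerate lower precision, followed by server-side de-quantization and averaging. The candidate bit-widths, the specific form of the regularizer, and all names (quantize, QuantLinear, local_loss, server_aggregate, lam) are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn

BIT_CHOICES = (2, 4, 8)  # assumed candidate per-layer bit-widths

def quantize(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Uniform symmetric quantization, returned in de-quantized form."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.detach().abs().max().clamp(min=1e-8) / qmax
    return torch.clamp(torch.round(w / scale), -qmax, qmax) * scale

class QuantLinear(nn.Module):
    """Linear layer trained with simulated quantization at an assigned bit-width."""
    def __init__(self, in_f: int, out_f: int, bits: int = 8):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_f, in_f) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_f))
        self.bits = bits  # in FedMPQ this assignment comes from the server each round

    def forward(self, x):
        # Straight-through estimator: quantized weights in the forward pass,
        # full-precision gradients in the backward pass.
        w_q = self.weight + (quantize(self.weight, self.bits) - self.weight).detach()
        return x @ w_q.t() + self.bias

def local_loss(model: nn.Module, x, y, lam: float = 1e-4):
    """Task loss plus a penalty rewarding layers that tolerate lower precision.
    Here (an assumption) the penalty is the quantization error at the
    next-lower bit-width; layers where it is already small become candidates
    for precision reduction in the server's next bit-width assignment."""
    task = nn.functional.cross_entropy(model(x), y)
    reg = torch.zeros((), device=x.device)
    for m in model.modules():
        if isinstance(m, QuantLinear) and m.bits > min(BIT_CHOICES):
            lower = max(b for b in BIT_CHOICES if b < m.bits)
            reg = reg + (quantize(m.weight, lower) - m.weight).pow(2).mean()
    return task + lam * reg

def server_aggregate(client_states):
    """De-quantize client updates to full precision and average them
    (FedAvg-style) into the global model."""
    keys = client_states[0].keys()
    return {k: torch.stack([s[k].float() for s in client_states]).mean(0)
            for k in keys}

A usage note: each client would train its QuantLinear layers with local_loss under the bit-widths the server assigned, return its (quantized) state dict, and the server would call server_aggregate before re-assigning bit-widths for the next round.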

Related Material


@InProceedings{Chen_2024_CVPR,
  author    = {Chen, Huancheng and Vikalo, Haris},
  title     = {Mixed-Precision Quantization for Federated Learning on Resource-Constrained Heterogeneous Devices},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {6138-6148}
}