@InProceedings{Xu_2025_WACV,
  author    = {Xu, Jiahao and Zhang, Zikai and Hu, Rui},
  title     = {Achieving Byzantine-Resilient Federated Learning via Layer-Adaptive Sparsified Model Aggregation},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {1508-1517}
}
Achieving Byzantine-Resilient Federated Learning via Layer-Adaptive Sparsified Model Aggregation
Abstract
Federated Learning (FL) enables multiple clients to train a model collaboratively without sharing their local data. Yet FL systems are vulnerable to well-designed Byzantine attacks, which aim to disrupt model training by uploading malicious model updates. Existing defense methods based on robust aggregation rules overlook the diversity in magnitude and direction across different layers of the model updates, resulting in limited robustness, particularly in non-IID settings. To address these challenges, we propose the Layer-Adaptive Sparsified Model Aggregation (LASA) approach, which combines pre-aggregation sparsification with layer-wise adaptive aggregation to improve robustness. Specifically, LASA includes a pre-aggregation sparsification module that sparsifies each client's update before aggregation, reducing the impact of malicious parameters and minimizing interference from less important parameters in the subsequent filtering process. Based on the sparsified updates, a layer-wise adaptive filter then selects benign layers across all clients for aggregation, using both magnitude and direction metrics. We provide a detailed theoretical robustness analysis of LASA and a resilience analysis of FL integrated with LASA. Extensive experiments on various IID and non-IID datasets demonstrate the effectiveness of LASA. Code is available at https://github.com/JiiahaoXU/LASA.
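The two-stage pipeline the abstract describes (sparsify each client's update, then filter layer-wise by magnitude and direction before averaging) can be sketched roughly as follows. This is an illustrative sketch only, not the paper's exact LASA rule: the top-k sparsification, the median-based cosine/norm scores, and the thresholds (`density`, `sim_quantile`, the `2 * med_norm` bound) are all assumptions chosen for clarity; see the linked repository for the actual implementation.

```python
import numpy as np

def sparsify(update, density=0.3):
    """Illustrative pre-aggregation sparsification: keep only the
    largest-magnitude fraction of entries (top-k by absolute value)."""
    flat = np.abs(update).ravel()
    k = max(1, int(density * flat.size))
    thresh = np.partition(flat, -k)[-k]
    return np.where(np.abs(update) >= thresh, update, 0.0)

def layer_adaptive_aggregate(client_updates, density=0.3, sim_quantile=0.5):
    """Illustrative layer-wise adaptive filter: for each layer, score every
    client's sparsified update by direction (cosine similarity with the
    coordinate-wise median) and magnitude (norm relative to the median norm),
    then average only the clients that pass both checks."""
    aggregated = {}
    for name in client_updates[0]:
        layers = [sparsify(u[name], density) for u in client_updates]
        stacked = np.stack([layer.ravel() for layer in layers])
        median_dir = np.median(stacked, axis=0)
        norms = np.linalg.norm(stacked, axis=1)
        med_norm = np.median(norms)
        # Direction metric: cosine similarity with the median update.
        sims = stacked @ median_dir / (norms * np.linalg.norm(median_dir) + 1e-12)
        cutoff = np.quantile(sims, sim_quantile)
        # Magnitude metric: reject clients whose layer norm is far above the median.
        keep = (sims >= cutoff) & (norms <= 2 * med_norm)
        if not keep.any():
            keep = sims >= np.max(sims)  # fall back to the most aligned client
        aggregated[name] = stacked[keep].mean(axis=0).reshape(layers[0].shape)
    return aggregated
```

Under this sketch, a client that uploads a uniformly huge update is dropped by the norm check even if its direction happens to align with the median, while a client whose update points away from the majority is dropped by the cosine check; different layers can keep different subsets of clients, which is the "layer-adaptive" part.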