Tensor-aggregated LoRA in Federated Fine-tuning

Zhixuan Li, Binqian Xu, Xiangbo Shu, Jiachao Zhang, Yazhou Yao, Guo-Sen Xie, Jinhui Tang; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025, pp. 1058-1067

Abstract


The combination of Large Language Models (LLMs) and Federated Learning (FL) to leverage privacy-preserving data has emerged as a promising approach to further enhance the Parameter-Efficient Fine-Tuning (PEFT) capabilities of LLMs. In real-world FL settings with resource heterogeneity, the training process of Low-Rank Adaptation (LoRA), the representative PEFT method, still faces two major challenges: aggregation noise and aggregation misalignment. In this paper, we propose a novel Tensor-aggregated LoRA (Te-LoRA) for Federated Fine-tuning, based on an alternating-freeze training strategy that avoids aggregation noise without additional server-side computational cost, while mitigating the aggregation suboptimality caused by parameter misalignment between heterogeneous LoRAs. To address the aggregation suboptimality issue in particular, we design a Pre-Aggregation Alignment strategy (PAA-strategy) and a Tensor-to-Matrix strategy (T2M-strategy), which align heterogeneous LoRAs and aggregate them into a unified tensor that is then decomposed into matrices adapted for client download. Extensive experiments demonstrate the effectiveness and robustness of Te-LoRA in both homogeneous and heterogeneous settings.
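The abstract's tensor-aggregate-then-decompose idea can be illustrated with a minimal sketch. This is not the paper's actual PAA/T2M algorithm; the shapes, heterogeneous ranks, plain averaging, and SVD re-factorization below are all illustrative assumptions standing in for the method described above.

```python
import numpy as np

# Hypothetical sketch of tensor aggregation for heterogeneous-rank LoRAs.
# Assumptions (not from the paper): updates are averaged uniformly, and the
# aggregated update is re-factorized with a truncated SVD.

rng = np.random.default_rng(0)
d_out, d_in = 16, 32           # dimensions of the frozen base weight
client_ranks = [2, 4, 8]       # resource heterogeneity: different LoRA ranks

# Each client holds LoRA factors B (d_out x r) and A (r x d_in); its weight
# update is Delta W_i = B_i @ A_i, which has a common shape across clients.
updates = []
for r in client_ranks:
    B = rng.normal(size=(d_out, r))
    A = rng.normal(size=(r, d_in))
    updates.append(B @ A)

# Stack the per-client updates into one tensor and aggregate along the
# client axis (here, a simple mean).
tensor = np.stack(updates)          # shape: (num_clients, d_out, d_in)
avg_update = tensor.mean(axis=0)    # aggregated update, shape (d_out, d_in)

# Decompose the aggregated update back into rank-r factors that clients
# can download, via a truncated SVD (an assumed stand-in for T2M).
r_target = 4
U, S, Vt = np.linalg.svd(avg_update, full_matrices=False)
B_new = U[:, :r_target] * S[:r_target]   # (d_out, r_target)
A_new = Vt[:r_target]                    # (r_target, d_in)

print(B_new.shape, A_new.shape)
```

The re-factorized pair (`B_new`, `A_new`) gives the best rank-`r_target` approximation of the averaged update in the Frobenius norm, which is why SVD is a natural choice for converting an aggregated dense update back into downloadable LoRA matrices.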

Related Material


[bibtex]
@InProceedings{Li_2025_ICCV,
    author    = {Li, Zhixuan and Xu, Binqian and Shu, Xiangbo and Zhang, Jiachao and Yao, Yazhou and Xie, Guo-Sen and Tang, Jinhui},
    title     = {Tensor-aggregated LoRA in Federated Fine-tuning},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {1058-1067}
}