FedDG-MoE: Test-Time Mixture-of-Experts Fusion for Federated Domain Generalization

Ahmed Radwan, Mahmoud Soliman, Omar Abdelaziz, Mohamed Shehata; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops, 2025, pp. 1811-1820

Abstract


Federated Domain Generalization (FDG) aims to learn a model from multiple distributed source domains that generalizes well to unseen target domains. While transformers have achieved remarkable success across computer vision tasks, their application to FDG is hindered by their large size, which leads to high communication costs and demands substantial data for effective fine-tuning. Inspired by recent progress in parameter-efficient fine-tuning, we propose a novel approach for FDG that leverages a Mixture of Experts (MoE) within a federated learning framework. Specifically, we employ a frozen, pre-trained vision transformer (ViT) as a backbone and introduce trainable MoE adapters based on Kronecker products at each client. This allows us to train only a small fraction of the parameters, significantly reducing both computational and communication overhead. Furthermore, the MoE architecture promotes diverse feature learning, which is crucial for generalization to unseen domains. Crucially, during inference we dynamically combine the client-specific MoE adapters using a novel test-time weighting scheme: the weights are determined by the cosine similarity between the feature statistics of a given test batch and those tracked at each client during training. We demonstrate empirically that our approach outperforms existing methods that rely on extensive fine-tuning of large pre-trained models, underscoring the efficacy of parameter-efficient fine-tuning and test-time adaptation for FDG. Our results highlight the crucial role of pre-trained features and the advantages of MoE adapters in federated domain generalization. The code is available at: https://github.com/AhmedMostafaSoliman/FedDG-MoE/
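The test-time fusion described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the choice of the batch mean as the feature statistic and the softmax normalization of similarities are assumptions, and the function and variable names (`fuse_weights`, `client_stats`) are hypothetical.

```python
import numpy as np

def fuse_weights(test_feats: np.ndarray, client_stats: list[np.ndarray]) -> np.ndarray:
    """Compute per-client fusion weights for the MoE adapters.

    Illustrative sketch: the test batch is summarized by its mean feature
    vector (an assumption; the paper may track richer statistics), and each
    client's weight is the cosine similarity between that summary and the
    statistics tracked at the client during training, normalized here with
    a softmax (also an assumption).
    """
    # Summarize the test batch by its mean feature vector.
    batch_stat = test_feats.mean(axis=0)

    # Cosine similarity between the batch summary and each client's statistic.
    sims = np.array([
        np.dot(batch_stat, s)
        / (np.linalg.norm(batch_stat) * np.linalg.norm(s) + 1e-8)
        for s in client_stats
    ])

    # Normalize similarities into fusion weights that sum to one.
    exp = np.exp(sims - sims.max())
    return exp / exp.sum()
```

At inference, these weights would then scale each client's adapter output before summation, so test batches resembling a given source domain rely more heavily on that client's adapter.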

Related Material


@InProceedings{Radwan_2025_CVPR,
    author    = {Radwan, Ahmed and Soliman, Mahmoud and Abdelaziz, Omar and Shehata, Mohamed},
    title     = {FedDG-MoE: Test-Time Mixture-of-Experts Fusion for Federated Domain Generalization},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops},
    month     = {June},
    year      = {2025},
    pages     = {1811-1820}
}