Domain Generalization using Large Pretrained Models with Mixture-of-Adapters

Gyuseong Lee, Wooseok Jang, Jinhyeon Kim, Jaewoo Jung, Seungryong Kim; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 8248-8258

Abstract


Learning robust vision models that perform well in out-of-distribution (OOD) situations is an important task for model deployment in real-world settings. Despite extensive research in this field, many proposed methods have shown only minor performance improvements over the simplest empirical risk minimization (ERM) approach, which was evaluated on a benchmark with a limited hyperparameter search space. Our focus in this study is on leveraging the knowledge of large pretrained models to better handle OOD scenarios and tackle domain generalization problems. However, prior research has revealed that naively fine-tuning a large pretrained model can impair OOD robustness. Thus, we employ parameter-efficient fine-tuning (PEFT) techniques to effectively preserve OOD robustness while working with large models. Our extensive experiments and analysis confirm that the most effective approaches involve ensembling diverse models and increasing the scale of pretraining. As a result, we achieve state-of-the-art performance in domain generalization tasks. Our code and project page are available at: https://cvlab-kaist.github.io/MoA
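
The mixture-of-adapters idea named in the title can be pictured as attaching several low-rank (LoRA-style) adapter experts to a frozen pretrained layer and combining their updates with a lightweight router, so only the adapters and router are trained. The sketch below is a minimal, hypothetical illustration of that pattern; the module name MoLoRALinear and the num_experts and rank parameters are assumptions for exposition, not the authors' implementation.

# Minimal sketch (assumption, not the authors' code): a frozen pretrained linear
# layer augmented with a router-weighted mixture of LoRA-style adapter experts.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoLoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, num_experts: int = 4, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False              # keep pretrained weights frozen (PEFT)
        d_in, d_out = base.in_features, base.out_features
        # Each expert is a low-rank pair (A: d_in -> rank, B: rank -> d_out).
        self.A = nn.Parameter(torch.randn(num_experts, d_in, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_experts, rank, d_out))
        self.router = nn.Linear(d_in, num_experts)  # token-wise gating over experts

    def forward(self, x):                          # x: (..., d_in)
        gates = F.softmax(self.router(x), dim=-1)  # (..., num_experts)
        # Low-rank update from every expert, then combine with the router weights.
        delta = torch.einsum("...i,eir,ero->...eo", x, self.A, self.B)
        update = (gates.unsqueeze(-1) * delta).sum(dim=-2)
        return self.base(x) + update

# Usage example: wrap one projection layer of a pretrained backbone.
layer = MoLoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 16, 768))               # (batch, tokens, d_out)

In such a setup the frozen base layer preserves the pretrained representation, which is the motivation the abstract gives for preferring PEFT over full fine-tuning.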

Related Material


[bibtex]
@InProceedings{Lee_2025_WACV,
    author    = {Lee, Gyuseong and Jang, Wooseok and Kim, Jinhyeon and Jung, Jaewoo and Kim, Seungryong},
    title     = {Domain Generalization using Large Pretrained Models with Mixture-of-Adapters},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {8248-8258}
}