@InProceedings{Sorkhei_2025_ICCV,
  author    = {Sorkhei, Moein and Konuk, Emir and Guo, Jingyu and Meng, Chanjuan and Matsoukas, Christos and Smith, Kevin},
  title     = {Efficient Self-Supervised Adaptation for Medical Image Analysis},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {October},
  year      = {2025},
  pages     = {884-891}
}
Efficient Self-Supervised Adaptation for Medical Image Analysis
Abstract
Self-supervised adaptation (SSA) improves foundation model transfer to medical domains but is computationally prohibitive. Although parameter-efficient fine-tuning (PEFT) methods such as LoRA have been explored for supervised adaptation, their effectiveness for SSA remains unknown. In this work, we introduce efficient self-supervised adaptation (ESSA), a framework that applies PEFT techniques to SSA with the aim of reducing computational cost and improving adaptation performance. To the best of our knowledge, we are the first to demonstrate that PEFT methods can be effectively applied to SSA to improve self-supervised learning, challenging the assumption that full-parameter SSA is necessary for optimal performance. Furthermore, we show that applying PEFT during supervised adaptation following self-supervision leads to additional performance gains, outperforming full-parameter training.
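To make the PEFT idea concrete, below is a minimal NumPy sketch of a LoRA-style low-rank update, the technique named in the abstract: a frozen base weight W is augmented with trainable factors B and A so that only the low-rank path is updated during adaptation. The shapes, rank, and scaling here are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=8):
    """LoRA-style forward pass: x @ (W + (alpha/r) * B @ A).T
    W is the frozen base weight (d_out, d_in); only A (r, d_in)
    and B (d_out, r) would receive gradients during adaptation."""
    r = A.shape[0]
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

d_in, d_out, r = 64, 64, 4  # illustrative sizes, not the paper's setting
rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))          # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d_in))  # small random init
B = np.zeros((d_out, r))                    # zero init: adapted model starts identical to base

x = rng.normal(size=(2, d_in))
# With B = 0 the low-rank path contributes nothing yet.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)

full_params = W.size
lora_params = A.size + B.size
print(f"trainable params: {lora_params} vs full fine-tuning: {full_params} "
      f"({lora_params / full_params:.1%})")
```

In this toy setting the LoRA factors hold 512 trainable parameters against 4096 for full fine-tuning (12.5%); at the rank-to-width ratios typical of large backbones, the fraction is far smaller, which is the source of the computational savings ESSA targets.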