On the Robustness Tradeoff in Fine-Tuning

Kunyang Li, Jean-Charles Noirot Ferrand, Ryan Sheatsley, Blaine Hoak, Yohan Beugin, Eric Pauley, Patrick McDaniel; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025, pp. 4898-4907

Abstract


Fine-tuning has become the standard practice for adapting pre-trained models to downstream tasks. However, its impact on model robustness is not well understood. In this work, we characterize the robustness-accuracy trade-off in fine-tuning. We evaluate the robustness and accuracy of fine-tuned models across 6 benchmark datasets and 7 fine-tuning strategies. We observe a consistent trade-off between adversarial robustness and accuracy. Peripheral updates such as BitFit are more effective for simple tasks---over 75% above the average, measured by the area under the Pareto frontiers, on CIFAR-10 and CIFAR-100. In contrast, fine-tuning information-heavy layers, such as attention layers via Compacter, achieves a better Pareto frontier on more complex tasks---57.5% and 34.6% above the average on Caltech-256 and CUB-200, respectively. Lastly, we observe that the robustness of fine-tuned models against out-of-distribution data closely tracks accuracy. These insights emphasize the need for robustness-aware fine-tuning to ensure reliable real-world deployments.
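The abstract compares fine-tuning strategies by the area under their accuracy-robustness Pareto frontiers. The sketch below is not the paper's code; it illustrates the general idea with made-up (accuracy, robustness) measurements: filter out dominated points, then integrate the surviving frontier with the trapezoid rule.

```python
# Illustrative sketch (not the authors' evaluation code): build the Pareto
# frontier over (accuracy, adversarial robustness) points and measure the
# area under it. All numbers below are hypothetical.

def pareto_frontier(points):
    """Return the points not dominated by any other (higher is better on both axes)."""
    frontier = []
    for acc, rob in sorted(points, reverse=True):   # sort by accuracy, descending
        if not frontier or rob > frontier[-1][1]:   # keep only strictly improving robustness
            frontier.append((acc, rob))
    return frontier[::-1]                           # return in ascending accuracy order

def area_under_frontier(frontier):
    """Trapezoidal area under the frontier in (accuracy, robustness) space."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(frontier, frontier[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area

# Hypothetical measurements, one (accuracy, robustness) pair per strategy.
points = [(0.90, 0.20), (0.85, 0.35), (0.80, 0.30), (0.70, 0.50)]
front = pareto_frontier(points)
print(front)                              # (0.80, 0.30) is dominated and dropped
print(area_under_frontier(front))         # 0.0775
```

A larger area means the strategy offers better accuracy-robustness trade-offs overall, which is how one strategy can be "X% above the average" frontier area across strategies.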

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Li_2025_ICCV,
    author    = {Li, Kunyang and Ferrand, Jean-Charles Noirot and Sheatsley, Ryan and Hoak, Blaine and Beugin, Yohan and Pauley, Eric and McDaniel, Patrick},
    title     = {On the Robustness Tradeoff in Fine-Tuning},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {4898-4907}
}