What Makes Transfer Learning Work for Medical Images: Feature Reuse & Other Factors

Christos Matsoukas, Johan Fredin Haslum, Moein Sorkhei, Magnus Söderberg, Kevin Smith; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 9225-9234

Abstract

Transfer learning is a standard technique to transfer knowledge from one domain to another. For applications in medical imaging, transfer from ImageNet has become the de facto approach, despite differences in the tasks and image characteristics between the domains. However, it is unclear what factors determine whether, and to what extent, transfer learning to the medical domain is useful. The long-standing assumption that features from the source domain are reused has recently been called into question. Through a series of experiments on several medical image benchmark datasets, we explore the relationship between transfer learning, data size, the capacity and inductive bias of the model, and the distance between the source and target domains. Our findings suggest that transfer learning is beneficial in most cases, and we characterize the important role feature reuse plays in its success.
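As context for the "de facto approach" the abstract refers to, here is a minimal sketch of ImageNet transfer learning for a medical image classification task, assuming PyTorch/torchvision; the two-class head and learning rate are illustrative placeholders, not settings taken from the paper:

    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from ImageNet-pretrained weights (the transfer-learning baseline).
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

    # Swap the 1000-way ImageNet head for the target medical task.
    # num_classes = 2 is a placeholder (e.g., benign vs. malignant),
    # not a setting from the paper.
    num_classes = 2
    model.fc = nn.Linear(model.fc.in_features, num_classes)

    # Fine-tune all layers on the target medical dataset; how much of the
    # pretrained features get reused is the question the paper studies.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()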

Related Material

[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Matsoukas_2022_CVPR,
    author    = {Matsoukas, Christos and Haslum, Johan Fredin and Sorkhei, Moein and S\"oderberg, Magnus and Smith, Kevin},
    title     = {What Makes Transfer Learning Work for Medical Images: Feature Reuse \& Other Factors},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {9225-9234}
}