How SAM Perceives Different mp-MRI Brain Tumor Domains?

Cecilia Diana-Albelda, Roberto Alcover-Couso, Álvaro García-Martín, Jesus Bescos; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 4959-4970

Abstract


Gliomas, among the deadliest forms of cancer, are brain tumors that present a significant challenge due to their rapid progression and resistance to treatment. Effective and early diagnosis is critical for improving patient prognosis. Deep learning, particularly through large-scale vision models like the Segment Anything Model (SAM), offers a new pathway for tumor segmentation. This study addresses the primary challenge of adapting SAM to mp-MRI brain scans, which typically comprise multiple imaging modalities that standard three-channel vision models do not fully exploit. We demonstrate that leveraging all available MRI modalities achieves superior performance compared to the standard mechanism of repeating an MRI scan to fit the input embedding. Our research also focuses on parameter-efficient tuning of SAM to effectively train the model while minimizing resource usage, showcasing significant improvements when evaluated across multiple datasets. Finally, we expose how SAM perceives differences across varied brain tumor domains by visually analyzing the features extracted from each of them. Our code and models are available at https://github.com/vpulab/med-sam-brain.
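To make the two ideas in the abstract concrete, the sketch below illustrates (1) a patch embedding that ingests all four mp-MRI modalities (T1, T1ce, T2, FLAIR) instead of one scan repeated three times to mimic RGB input, and (2) LoRA-style low-rank adapters as one common parameter-efficient tuning scheme. This is a minimal sketch, not the authors' implementation: the module names (MultiModalPatchEmbed, LoRALinear), the adapter rank, and the wiring against the official segment_anything interface are all assumptions made here for illustration.

```python
import torch
import torch.nn as nn


class MultiModalPatchEmbed(nn.Module):
    """Hypothetical patch embedding that accepts all four mp-MRI
    modalities (T1, T1ce, T2, FLAIR) rather than a single scan
    repeated three times to mimic an RGB image."""

    def __init__(self, in_chans: int = 4, embed_dim: int = 768,
                 patch_size: int = 16):
        super().__init__()
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, 4, H, W) -> (B, H/16, W/16, embed_dim), the layout
        # SAM's ViT encoder expects from its patch-embedding stem.
        return self.proj(x).permute(0, 2, 3, 1)


class LoRALinear(nn.Module):
    """LoRA-style adapter: the pretrained linear layer stays frozen
    and only the low-rank A/B matrices receive gradients."""

    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze pretrained weights
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # start as a zero update

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.lora_b(self.lora_a(x))


# Hypothetical wiring against the official `segment_anything` package:
# swap the stem for the 4-channel embedding and wrap each attention
# qkv projection with a low-rank adapter.
# sam.image_encoder.patch_embed = MultiModalPatchEmbed()
# for blk in sam.image_encoder.blocks:
#     blk.attn.qkv = LoRALinear(blk.attn.qkv, rank=4)
```

Under such a scheme, only the small A/B matrices (and the new 4-channel stem) are trained, which is what keeps the resource usage of tuning a large foundation model like SAM manageable.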

Related Material


@InProceedings{Diana-Albelda_2024_CVPR,
    author    = {Diana-Albelda, Cecilia and Alcover-Couso, Roberto and Garc{\'\i}a-Mart{\'\i}n, {\'A}lvaro and Bescos, Jesus},
    title     = {How SAM Perceives Different mp-MRI Brain Tumor Domains?},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {4959-4970}
}