Versatile Medical Image Segmentation Learned from Multi-Source Datasets via Model Self-Disambiguation

Xiaoyang Chen, Hao Zheng, Yuemeng Li, Yuncong Ma, Liang Ma, Hongming Li, Yong Fan; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 11747-11756

Abstract


A versatile medical image segmentation model applicable to images acquired with diverse equipment and protocols can facilitate model deployment and maintenance. However, building such a model typically demands a large, diverse, and fully annotated dataset, which is challenging to obtain due to the labor-intensive nature of data curation. To address this challenge, we propose a cost-effective alternative that harnesses multi-source data with only partial or sparse segmentation labels for training, substantially reducing the cost of developing a versatile model. We devise strategies for model self-disambiguation, prior knowledge incorporation, and imbalance mitigation to tackle challenges associated with inconsistently labeled multi-source data, including label ambiguity and modality, dataset, and class imbalances. Experimental results on a multi-modal dataset compiled from eight different sources for abdominal structure segmentation demonstrate the effectiveness and superior performance of our method compared to state-of-the-art alternatives. We anticipate that its cost-saving features, which optimize the utilization of existing annotated data and reduce annotation efforts for new data, will have a significant impact in the field.
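The abstract does not spell out the self-disambiguation strategy, but the core difficulty it targets, namely that structures left unannotated in one source dataset appear as "background" in its labels, is commonly handled with a marginal loss over only the annotated classes. The sketch below illustrates that general idea; the function marginal_cross_entropy, its tensor shapes, and the background-merging rule are illustrative assumptions for exposition, not the authors' implementation.

import torch
import torch.nn.functional as F

def marginal_cross_entropy(logits: torch.Tensor,
                           target: torch.Tensor,
                           labeled_classes: list[int]) -> torch.Tensor:
    """Cross-entropy restricted to the classes annotated in one source dataset.

    logits:  (B, C, H, W) raw outputs over the full (union) label set.
    target:  (B, H, W) integer labels; structures unannotated in this dataset
             are recorded as background (class 0) by its labeling protocol.
    labeled_classes: class indices annotated in this dataset (must include 0).
    """
    probs = F.softmax(logits, dim=1)
    unlabeled = [c for c in range(logits.shape[1]) if c not in labeled_classes]
    merged = []
    for c in labeled_classes:
        p = probs[:, c]
        if c == 0 and unlabeled:
            # Voxels of unannotated structures look like background here, so
            # fold their predicted probability mass into the background class
            # instead of penalizing the model for predicting them.
            p = p + probs[:, unlabeled].sum(dim=1)
        merged.append(p)
    merged = torch.stack(merged, dim=1).clamp_min(1e-8)  # (B, K, H, W)
    # Remap original class indices to positions within labeled_classes.
    remapped = target.clone()
    for i, c in enumerate(labeled_classes):
        remapped[target == c] = i
    return F.nll_loss(merged.log(), remapped)

# Example: a 5-class model trained on a (hypothetical) source that annotates
# only background (0), liver (1), and spleen (3).
logits = torch.randn(2, 5, 64, 64)
classes = torch.tensor([0, 1, 3])
target = classes[torch.randint(0, 3, (2, 64, 64))]
loss = marginal_cross_entropy(logits, target, [0, 1, 3])

A loss of this form lets every source dataset contribute a training signal over exactly the classes it annotates, which is one standard way to train a single model across inconsistently labeled sources; the paper itself should be consulted for the actual self-disambiguation, prior-knowledge, and imbalance-mitigation strategies.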

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Chen_2024_CVPR,
    author    = {Chen, Xiaoyang and Zheng, Hao and Li, Yuemeng and Ma, Yuncong and Ma, Liang and Li, Hongming and Fan, Yong},
    title     = {Versatile Medical Image Segmentation Learned from Multi-Source Datasets via Model Self-Disambiguation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {11747-11756}
}