Improving the Generalization of Segmentation Foundation Model under Distribution Shift via Weakly Supervised Adaptation

Haojie Zhang, Yongyi Su, Xun Xu, Kui Jia; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 23385-23395

Abstract


The success of large language models has inspired the computer vision community to explore image segmentation foundation models that are able to generalize zero/few-shot through prompt engineering. Segment-Anything (SAM), among others, is the state-of-the-art image segmentation foundation model, demonstrating strong zero/few-shot generalization. Despite this success, recent studies reveal the weakness of SAM under strong distribution shift; in particular, SAM performs poorly on corrupted natural images, camouflaged images, medical images, etc. Motivated by these observations, we aim to develop a self-training-based strategy to adapt SAM to the target distribution. Given the unique challenges of a large source dataset, high computation cost, and incorrect pseudo labels, we propose a weakly supervised self-training architecture with anchor regularization and low-rank finetuning to improve the robustness and computation efficiency of adaptation. We validate the effectiveness on five types of downstream segmentation tasks, including natural clean/corrupted images, medical images, camouflaged images, and robotic images. Our proposed method is task-agnostic in nature and outperforms pre-trained SAM and state-of-the-art domain adaptation methods on almost all downstream tasks with the same testing prompt inputs.
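To make the adaptation recipe described above concrete, the following is a minimal, hypothetical PyTorch sketch (not the paper's released code) of the three ingredients named in the abstract: low-rank (LoRA-style) finetuning of frozen pretrained layers, pseudo-label self-training driven by weak prompts, and an anchor regularizer that keeps the adapted model close to a frozen copy of the source model. The segmentation-model interface model(images, weak_prompts) -> per-pixel logits, the confidence threshold, and the EMA teacher update are all illustrative assumptions.

# Hypothetical sketch: LoRA-style low-rank adapters plus anchor-regularized
# self-training. Names and interfaces are illustrative, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update W + scale * B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # freeze pretrained weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Low-rank residual path added on top of the frozen projection.
        return self.base(x) + self.scale * F.linear(F.linear(x, self.A), self.B)


def self_training_step(student, teacher, anchor, images, weak_prompts, optimizer,
                       conf_thresh=0.9, anchor_weight=0.1):
    """One adaptation step on unlabeled target images with weak prompts (e.g. boxes)."""
    with torch.no_grad():
        teacher_logits = teacher(images, weak_prompts)   # pseudo-label source
        anchor_logits = anchor(images, weak_prompts)     # frozen source model
        probs = torch.sigmoid(teacher_logits)
        pseudo = (probs > 0.5).float()
        # Keep only confidently foreground/background pixels for the pseudo-label loss.
        mask = ((probs > conf_thresh) | (probs < 1 - conf_thresh)).float()

    student_logits = student(images, weak_prompts)
    seg_loss = (F.binary_cross_entropy_with_logits(
        student_logits, pseudo, reduction="none") * mask).mean()
    # Anchor regularization: penalize drift of the adapted model away from the
    # frozen source predictions, which guards against noisy pseudo labels.
    anchor_loss = F.mse_loss(torch.sigmoid(student_logits), torch.sigmoid(anchor_logits))

    loss = seg_loss + anchor_weight * anchor_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # EMA update of the teacher from the student (a common self-training choice).
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(0.999).add_(s, alpha=0.001)
    return loss.item()

In practice, only the low-rank adapter parameters (and any lightweight heads) would be passed to the optimizer, which is what keeps the adaptation memory- and compute-efficient relative to full finetuning.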

Related Material


BibTeX:
@InProceedings{Zhang_2024_CVPR,
    author    = {Zhang, Haojie and Su, Yongyi and Xu, Xun and Jia, Kui},
    title     = {Improving the Generalization of Segmentation Foundation Model under Distribution Shift via Weakly Supervised Adaptation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {23385-23395}
}