Adapting the Segment Anything Model During Usage in Novel Situations

Robin Schön, Julian Lorenz, Katja Ludwig, Rainer Lienhart; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 3616-3626

Abstract

The interactive segmentation task consists of creating object segmentation masks based on user interactions. The most common way to guide a model towards producing a correct segmentation is through clicks on the object and the background. The recently published Segment Anything Model (SAM) supports a generalized version of the interactive segmentation problem and has been trained on an object segmentation dataset containing 1.1 billion masks. Despite being trained extensively and with the explicit purpose of serving as a foundation model, we show significant limitations of SAM when it is applied to interactive segmentation on novel domains or object types. On the datasets used, SAM exhibits a failure rate FR30@90 of up to 72.6%. Since we still want such foundation models to be immediately applicable, we present a framework that adapts SAM during immediate usage. To this end, we leverage the user interactions and masks that are constructed during the interactive segmentation process. We use this information to generate pseudo-labels, which in turn serve to compute a loss function and optimize a part of the SAM model. The presented method yields a relative reduction of up to 48.1% in the FR20@85 metric and 46.6% in the FR30@90 metric.
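
As an illustration of the adaptation loop the abstract describes, the following PyTorch sketch shows how a pseudo-label derived from the user-accepted mask could be used to update a subset of SAM's parameters. The model interface (a callable returning mask logits) and all names (adapt_on_pseudo_label, adapt_params) are hypothetical assumptions for this sketch; the abstract does not specify which parameters are optimized or which loss is used.

    import torch
    import torch.nn.functional as F

    def adapt_on_pseudo_label(model, image, prompts, pseudo_mask,
                              adapt_params, lr=1e-4, steps=1):
        # model: SAM-like network mapping (image, prompts) -> mask logits
        #        of shape (1, 1, H, W); an assumed interface, not SAM's API.
        # pseudo_mask: float tensor in {0, 1} of shape (1, 1, H, W), built
        #        from the mask the user accepted during interaction.
        # adapt_params: the parameter subset to optimize; everything else
        #        stays frozen, mirroring "optimize a part of the SAM model".
        optimizer = torch.optim.Adam(adapt_params, lr=lr)
        for _ in range(steps):
            logits = model(image, prompts)
            loss = F.binary_cross_entropy_with_logits(logits, pseudo_mask)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        return loss.item()

In this reading, one such step could be run after each completed segmentation, so the model gradually specializes to the novel domain while the user keeps working.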

Related Material

[bibtex]
@InProceedings{Schon_2024_CVPR,
  author    = {Sch\"on, Robin and Lorenz, Julian and Ludwig, Katja and Lienhart, Rainer},
  title     = {Adapting the Segment Anything Model During Usage in Novel Situations},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {3616-3626}
}