Unsupervised Nuclei Segmentation by Improving Pseudo Labels from Segment Anything Model

Ryota Nakai, Kazuhiro Hotta; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2025, pp. 1025-1033

Abstract


Creating annotations for cell image segmentation requires significant manual effort and cost. To address this, we explore the use of the Segment Anything Model (SAM) to generate pseudo labels without human supervision. However, applying SAM to biomedical images such as cell nuclei, which are not included in its pretraining data, often leads to issues such as missed nuclei or erroneous segmentation of non-nuclei regions. In this paper, we propose a fully unsupervised method that uses three U-Net models to refine SAM-generated pseudo labels. The first U-Net is trained using SAM outputs as supervision. The second U-Net is then trained using pseudo labels generated by taking the logical OR of the outputs from SAM and the first U-Net; this step aims to recover missed nuclei and improve the extraction of nuclei features, increasing the number of nuclei that can be detected. However, using the logical OR of the outputs from SAM, U-Net1, and U-Net2 as the third pseudo label leads to over-segmentation. To overcome this issue, we introduce a majority voting scheme over the three outputs to construct a more accurate pseudo label. Finally, the third U-Net is trained on this majority-vote-refined pseudo label, which further improves the quality of nuclei segmentation. Our method achieves segmentation performance comparable to fully supervised training without using any ground truth annotations.
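The two pseudo-label fusion rules described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the mask names (`sam`, `unet1`, `unet2`) and the toy 1-D arrays are hypothetical stand-ins for per-pixel binary segmentation masks.

```python
import numpy as np

def or_fusion(mask_a, mask_b):
    # Pseudo label for the second U-Net: logical OR of two binary masks,
    # so a pixel counts as nucleus if either source detected it
    # (recovers nuclei missed by one of the two).
    return np.logical_or(mask_a, mask_b).astype(np.uint8)

def majority_vote(mask_a, mask_b, mask_c):
    # Pseudo label for the third U-Net: keep a pixel as nucleus only
    # when at least two of the three masks agree, which curbs the
    # over-segmentation a three-way OR would produce.
    votes = mask_a.astype(np.uint8) + mask_b.astype(np.uint8) + mask_c.astype(np.uint8)
    return (votes >= 2).astype(np.uint8)

# Toy example: three binary predictions over five pixels.
sam   = np.array([1, 1, 0, 0, 1])
unet1 = np.array([1, 0, 1, 0, 1])
unet2 = np.array([0, 1, 1, 0, 0])

print(or_fusion(sam, unet1))             # [1 1 1 0 1]
print(majority_vote(sam, unet1, unet2))  # [1 1 1 0 1]
```

In real use the inputs would be 2-D (or batched) masks of identical shape; both rules are purely element-wise, so the same functions apply unchanged.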

Related Material


[pdf]
[bibtex]
@InProceedings{Nakai_2025_ICCV, author = {Nakai, Ryota and Hotta, Kazuhiro}, title = {Unsupervised Nuclei Segmentation by Improving Pseudo Labels from Segment Anything Model}, booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops}, month = {October}, year = {2025}, pages = {1025-1033} }