Sparse Object-Level Supervision for Instance Segmentation With Pixel Embeddings

Adrian Wolny, Qin Yu, Constantin Pape, Anna Kreshuk; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 4402-4411

Abstract


Most state-of-the-art instance segmentation methods have to be trained on densely annotated images. While difficult in general, this requirement is especially daunting for biomedical images, where domain expertise is often required for annotation and no large public data collections are available for pre-training. We propose to address the dense annotation bottleneck by introducing a proposal-free segmentation approach based on non-spatial embeddings, which exploits the structure of the learned embedding space to extract individual instances in a differentiable way. The segmentation loss can then be applied directly to instances and the overall pipeline can be trained in a fully- or weakly-supervised manner. We consider the challenging case of positive-unlabeled supervision, where a novel self-supervised consistency loss is introduced for the unlabeled parts of the training data. We evaluate the proposed method on 2D and 3D segmentation problems in different microscopy modalities as well as on the Cityscapes and CVPPP instance segmentation benchmarks, achieving state-of-the-art results on the latter.
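
To make the idea of applying a segmentation loss directly to instances more concrete, the sketch below shows one way a soft instance mask can be extracted differentiably from a dense pixel-embedding map: distances to an anchor pixel's embedding are passed through a Gaussian kernel, and the resulting soft mask is compared to a ground-truth mask with a Dice loss so that gradients flow back into the embedding network. The function names, the sigma bandwidth, and the toy tensor shapes are illustrative assumptions, not the authors' exact formulation.

    import torch

    def soft_instance_mask(embeddings, anchor_yx, sigma=0.5):
        # embeddings: (C, H, W) per-pixel embedding vectors (hypothetical layout).
        # anchor_yx: (y, x) coordinate of a pixel known to lie inside the instance.
        c, h, w = embeddings.shape
        anchor = embeddings[:, anchor_yx[0], anchor_yx[1]].view(c, 1, 1)
        # Squared distance of every pixel embedding to the anchor embedding.
        dist_sq = ((embeddings - anchor) ** 2).sum(dim=0)
        # Gaussian kernel maps distances to a soft membership in [0, 1].
        return torch.exp(-dist_sq / (2 * sigma ** 2))

    def dice_loss(pred, target, eps=1e-6):
        # Soft Dice loss between the predicted soft mask and a binary target mask.
        inter = (pred * target).sum()
        return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

    if __name__ == "__main__":
        emb = torch.randn(16, 64, 64, requires_grad=True)  # toy embedding map
        gt = torch.zeros(64, 64)
        gt[10:30, 10:30] = 1.0                             # toy instance mask
        mask = soft_instance_mask(emb, anchor_yx=(20, 20))
        loss = dice_loss(mask, gt)
        loss.backward()                                    # gradients reach the embeddings
        print(loss.item())

Because the whole pipeline stays differentiable, the same instance-level loss can be restricted to the few annotated objects in a positive-unlabeled setting, with a separate consistency term (not sketched here) covering the unlabeled regions.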

Related Material


@InProceedings{Wolny_2022_CVPR,
    author    = {Wolny, Adrian and Yu, Qin and Pape, Constantin and Kreshuk, Anna},
    title     = {Sparse Object-Level Supervision for Instance Segmentation With Pixel Embeddings},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {4402-4411}
}