MaskSplit: Self-Supervised Meta-Learning for Few-Shot Semantic Segmentation

Mustafa Sercan Amac, Ahmet Sencan, Bugra Baran, Nazli Ikizler-Cinbis, Ramazan Gokberk Cinbis; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2022, pp. 1067-1077

Abstract


Just like other few-shot learning problems, few-shot segmentation aims to minimize the need for manual annotation, which is particularly costly in segmentation tasks. Even though the few-shot setting reduces this cost for novel test classes, there is still a need to annotate the training data. To alleviate this need, we propose a self-supervised training approach for learning few-shot segmentation models. We first use unsupervised saliency estimation to obtain pseudo-masks on images. We then train a simple prototype-based model over different splits of the pseudo-masks and augmentations of the images. Our extensive experiments show that the proposed approach achieves promising results, highlighting the potential of self-supervised training. To the best of our knowledge, this is the first work that addresses the unsupervised few-shot segmentation problem on natural images.
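To make the described pipeline concrete, the following is a minimal sketch (not the authors' code) of one prototype-based few-shot segmentation step: a saliency-derived pseudo-mask is used to pool a class prototype from support features, which is then matched against query features by cosine similarity. The feature extractor, tensor shapes, and helper names here are illustrative assumptions.

    # Minimal sketch of prototype-based matching with a pseudo-mask.
    # Backbone features and the pseudo-mask are stand-ins (assumptions),
    # not the paper's actual architecture or training procedure.
    import torch
    import torch.nn.functional as F

    def masked_average_pooling(features, mask):
        """Pool a prototype vector from support features under a binary pseudo-mask.

        features: (B, C, H, W) feature map from a (hypothetical) frozen backbone.
        mask:     (B, 1, h, w) binary pseudo-mask, e.g. from saliency estimation.
        """
        mask = F.interpolate(mask, size=features.shape[-2:],
                             mode="bilinear", align_corners=False)
        # Weighted spatial average; small epsilon avoids division by zero.
        prototype = (features * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)
        return prototype  # (B, C)

    def predict_query_mask(query_features, prototype):
        """Score each query location by cosine similarity to the prototype."""
        prototype = prototype[:, :, None, None].expand_as(query_features)
        similarity = F.cosine_similarity(query_features, prototype, dim=1)
        return similarity.unsqueeze(1)  # (B, 1, H, W)

    if __name__ == "__main__":
        # Dummy tensors stand in for backbone features and a saliency pseudo-mask.
        support_feats = torch.randn(2, 256, 32, 32)
        query_feats = torch.randn(2, 256, 32, 32)
        pseudo_mask = (torch.rand(2, 1, 128, 128) > 0.5).float()
        proto = masked_average_pooling(support_feats, pseudo_mask)
        pred = predict_query_mask(query_feats, proto)
        print(pred.shape)  # torch.Size([2, 1, 32, 32])

In the self-supervised setting described above, the support and query branches would be built from splits of the same pseudo-mask and augmented views of the same image, rather than from manually annotated pairs.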

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Amac_2022_WACV,
    author    = {Amac, Mustafa Sercan and Sencan, Ahmet and Baran, Bugra and Ikizler-Cinbis, Nazli and Cinbis, Ramazan Gokberk},
    title     = {MaskSplit: Self-Supervised Meta-Learning for Few-Shot Semantic Segmentation},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2022},
    pages     = {1067-1077}
}