A Pixel-Level Meta-Learner for Weakly Supervised Few-Shot Semantic Segmentation

Yuan-Hao Lee, Fu-En Yang, Yu-Chiang Frank Wang; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2022, pp. 2170-2180

Abstract


Few-shot semantic segmentation addresses the learning task in which only a few images with ground-truth pixel-level labels are available for the novel classes of interest. One is typically required to collect a large amount of data (i.e., base classes) with such ground-truth information, followed by meta-learning strategies to address the above learning task. When only image-level semantic labels can be observed during both training and testing, it is considered an even more challenging task of weakly supervised few-shot semantic segmentation. To address this problem, we propose a novel meta-learning framework, which predicts pseudo pixel-level segmentation masks from a limited amount of data and their semantic labels. More importantly, our learning scheme further exploits the produced pixel-level information for query image inputs with segmentation guarantees. Thus, our proposed learning model can be viewed as a pixel-level meta-learner. Through extensive experiments on benchmark datasets, we show that our model achieves satisfactory performance under fully supervised settings, while performing favorably against state-of-the-art methods under weakly supervised settings.
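To make the core idea concrete, the sketch below shows one common way pseudo pixel-level masks can be derived from an image-level label: project per-pixel features onto the classifier weights of the labelled class (a CAM-style heuristic), normalize, and threshold. This is a minimal illustration of the general technique, not the authors' actual architecture; the function name, feature shapes, and threshold are assumptions for the example.

```python
import numpy as np

def pseudo_mask_from_image_label(features, class_weights, threshold=0.5):
    """Derive a pseudo pixel-level mask from an image-level label.

    CAM-style heuristic (illustrative only, not the paper's model):
    project each pixel's feature vector onto the classifier weights
    of the labelled class, rescale to [0, 1], and threshold.
    """
    # features: (H, W, C) feature map; class_weights: (C,)
    activation = features @ class_weights           # (H, W) class evidence
    lo, hi = activation.min(), activation.max()
    norm = (activation - lo) / (hi - lo + 1e-8)     # rescale to [0, 1]
    return (norm >= threshold).astype(np.uint8)     # binary pseudo mask

# Toy example with random features standing in for a backbone's output.
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 8, 16))
w = rng.standard_normal(16)
mask = pseudo_mask_from_image_label(feats, w)
print(mask.shape, mask.dtype)
```

In a weakly supervised few-shot episode, such pseudo masks for the support images would then serve as the supervision signal when segmenting the query images, in place of the unavailable ground-truth masks.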

Related Material


BibTeX:

@InProceedings{Lee_2022_WACV,
  author    = {Lee, Yuan-Hao and Yang, Fu-En and Wang, Yu-Chiang Frank},
  title     = {A Pixel-Level Meta-Learner for Weakly Supervised Few-Shot Semantic Segmentation},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2022},
  pages     = {2170-2180}
}