Point-Supervised Semantic Segmentation of Natural Scenes via Hyperspectral Imaging

Tianqi Ren, Qiu Shen, Ying Fu, Shaodi You; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 1357-1367

Abstract


Semantic segmentation of natural scenes is an important task in computer vision, but training accurate segmentation models relies heavily on detailed and accurate pixel-level annotations, which are difficult and time-consuming to collect, especially for complicated natural scenes. Weakly-supervised methods greatly reduce labeling cost, but at the expense of significant performance degradation. In this paper, we explore the possibility of introducing hyperspectral imaging to improve the performance of weakly-supervised semantic segmentation. Specifically, we take two challenging hyperspectral datasets of outdoor natural scenes as examples and randomly label dozens of points with semantic categories to construct a point-supervised semantic segmentation benchmark. We then propose a spectral and spatial fusion method to generate detailed pixel-level annotations, which are used to supervise the semantic segmentation models. Through multiple experiments, we find that hyperspectral information is greatly helpful to point-supervised semantic segmentation, as it is more distinctive than RGB. As a result, our proposed method, with only point supervision, achieves approximately 90% of the performance of the fully-supervised method in many cases.
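The core idea of expanding sparse point labels via spectral similarity can be sketched as follows. This is a minimal illustration, not the authors' actual fusion method: it assumes a NumPy hyperspectral cube and assigns each pixel the label of the most spectrally similar annotated point (cosine similarity of normalized spectra, i.e., a spectral-angle criterion); the function name and interface are hypothetical.

```python
import numpy as np

def propagate_point_labels(cube, points):
    """Expand sparse point annotations into a dense label map by
    assigning each pixel the label of the most spectrally similar
    annotated point (hypothetical sketch, not the paper's method).

    cube:   (H, W, B) hyperspectral image with B spectral bands
    points: list of ((row, col), label) annotated points
    Returns an (H, W) integer label map.
    """
    H, W, B = cube.shape
    pixels = cube.reshape(-1, B).astype(np.float64)
    pixels /= np.linalg.norm(pixels, axis=1, keepdims=True) + 1e-12

    # Reference spectra taken at the annotated points, also normalized.
    refs = np.stack([cube[r, c] for (r, c), _ in points]).astype(np.float64)
    refs /= np.linalg.norm(refs, axis=1, keepdims=True) + 1e-12
    labels = np.array([lab for _, lab in points])

    # Cosine similarity of every pixel to every reference spectrum;
    # each pixel inherits the label of its best spectral match.
    sim = pixels @ refs.T  # shape (H*W, num_points)
    return labels[np.argmax(sim, axis=1)].reshape(H, W)
```

In practice the paper additionally fuses spatial cues before the dense labels supervise a segmentation model; a purely spectral nearest-match like this only conveys why hyperspectral signatures are more distinctive than RGB for this purpose.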

Related Material


[pdf]
[bibtex]
@InProceedings{Ren_2024_CVPR,
    author    = {Ren, Tianqi and Shen, Qiu and Fu, Ying and You, Shaodi},
    title     = {Point-Supervised Semantic Segmentation of Natural Scenes via Hyperspectral Imaging},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {1357-1367}
}