Boosting Semantic Segmentation from the Perspective of Explicit Class Embeddings

Yuhe Liu, Chuanjian Liu, Kai Han, Quan Tang, Zengchang Qin; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 821-831

Abstract


Semantic segmentation is a computer vision task that associates a label with each pixel in an image. Modern approaches tend to introduce class embeddings into semantic segmentation to fully exploit category semantics, and regard the supervised class masks as the final prediction. In this paper, we explore the mechanism of class embeddings and observe that more explicit and meaningful class embeddings can be purposefully generated from class masks. Following this observation, we propose ECENet, a new segmentation paradigm in which class embeddings are obtained and enhanced explicitly while interacting with multi-stage image features. On this basis, we revisit the traditional decoding process and explore an inverted information flow between segmentation masks and class embeddings. Furthermore, to ensure the discriminability and informativeness of the features from the backbone, we propose a Feature Reconstruction module that combines an intrinsic branch and a diverse branch so that the features retain both diversity and redundancy. Experiments show that ECENet outperforms its counterparts on the ADE20K dataset at much lower computational cost and achieves new state-of-the-art results on the PASCAL-Context dataset. The code will be released at https://gitee.com/mindspore/models and https://github.com/Carol-lyh/ECENet.
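
The sketch below is not the authors' ECENet implementation; it is a minimal, hypothetical illustration of the general idea the abstract describes: learnable class embeddings that are refined through interaction (here, cross-attention) with multi-stage image features, after which segmentation masks are predicted from the similarity between the refined embeddings and pixel features. The module name, dimensions, and the choice of cross-attention are assumptions for illustration only.

import torch
import torch.nn as nn

class ClassEmbeddingDecoder(nn.Module):
    """Hypothetical sketch: class embeddings refined by multi-stage image features."""

    def __init__(self, num_classes, dim, num_stages=3, num_heads=8):
        super().__init__()
        # One learnable embedding vector per class, refined stage by stage.
        self.class_embed = nn.Parameter(torch.randn(num_classes, dim))
        self.cross_attn = nn.ModuleList(
            [nn.MultiheadAttention(dim, num_heads, batch_first=True)
             for _ in range(num_stages)]
        )
        self.norms = nn.ModuleList([nn.LayerNorm(dim) for _ in range(num_stages)])

    def forward(self, features):
        # features: list of (B, C, H_i, W_i) multi-stage feature maps with C == dim.
        B = features[0].shape[0]
        q = self.class_embed.unsqueeze(0).expand(B, -1, -1)   # (B, K, C)
        for attn, norm, feat in zip(self.cross_attn, self.norms, features):
            kv = feat.flatten(2).transpose(1, 2)               # (B, H_i*W_i, C)
            out, _ = attn(q, kv, kv)                           # class embeddings query pixels
            q = norm(q + out)
        # Mask logits: dot product between refined class embeddings and the
        # highest-resolution feature map.
        pix = features[0].flatten(2)                           # (B, C, H*W)
        logits = torch.einsum('bkc,bcn->bkn', q, pix)
        H, W = features[0].shape[-2:]
        return logits.view(B, -1, H, W)                        # (B, K, H, W)

# Toy usage with random multi-stage features (150 classes, as in ADE20K).
if __name__ == "__main__":
    feats = [torch.randn(2, 256, 32, 32),
             torch.randn(2, 256, 16, 16),
             torch.randn(2, 256, 8, 8)]
    decoder = ClassEmbeddingDecoder(num_classes=150, dim=256)
    print(decoder(feats).shape)  # torch.Size([2, 150, 32, 32])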

Related Material


@InProceedings{Liu_2023_ICCV,
    author    = {Liu, Yuhe and Liu, Chuanjian and Han, Kai and Tang, Quan and Qin, Zengchang},
    title     = {Boosting Semantic Segmentation from the Perspective of Explicit Class Embeddings},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {821-831}
}