@InProceedings{Hajimiri_2025_WACV,
  author    = {Hajimiri, Sina and Ben Ayed, Ismail and Dolz, Jose},
  title     = {Pay Attention to Your Neighbours: Training-Free Open-Vocabulary Semantic Segmentation},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {5061-5071}
}
Pay Attention to Your Neighbours: Training-Free Open-Vocabulary Semantic Segmentation
Abstract
Despite the significant progress in deep learning for dense visual recognition problems such as semantic segmentation, traditional methods are constrained by fixed class sets. Meanwhile, vision-language foundation models, such as CLIP, have showcased remarkable effectiveness in numerous zero-shot image-level tasks, owing to their robust generalizability. Recently, a body of work has investigated utilizing these models in open-vocabulary semantic segmentation (OVSS). However, existing approaches often rely on impractical supervised pre-training or access to additional pre-trained networks. In this work, we propose a strong baseline for training-free OVSS, termed Neighbour-Aware CLIP (NACLIP), representing a straightforward adaptation of CLIP tailored for this scenario. Our method enforces localization of patches in the self-attention of CLIP's vision transformer, which, despite being crucial for dense prediction tasks, has been overlooked in the OVSS literature. By incorporating design choices favouring segmentation, our approach significantly improves performance without requiring additional data, auxiliary pre-trained networks, or extensive hyperparameter tuning, making it highly practical for real-world applications. Experiments are performed on 8 popular semantic segmentation benchmarks, yielding state-of-the-art performance in most scenarios. Our code is publicly available at https://github.com/sinahmr/NACLIP.
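The idea of enforcing patch localization in self-attention can be illustrated with a minimal sketch: add a spatial prior to the attention logits so that each patch token attends more strongly to its spatial neighbours on the ViT grid. The sketch below is a hypothetical illustration in NumPy, not the paper's implementation; the Gaussian form of the bias, the function name, and the `sigma` parameter are all assumptions made for exposition.

```python
import numpy as np

def neighbour_biased_attention(patch_tokens, grid_h, grid_w, sigma=2.0):
    """Self-attention over ViT patch tokens with a Gaussian spatial prior.

    patch_tokens: (n, d) array, n = grid_h * grid_w patches.
    Returns the attended tokens and the attention-weight matrix.
    Illustrative sketch only; not the authors' exact formulation.
    """
    n, d = patch_tokens.shape
    # Standard scaled dot-product scores (queries = keys = tokens here).
    scores = patch_tokens @ patch_tokens.T / np.sqrt(d)
    # 2-D grid coordinates of each patch index.
    ys, xs = np.divmod(np.arange(n), grid_w)
    # Squared spatial distance between every pair of patches.
    dist2 = (ys[:, None] - ys[None, :]) ** 2 + (xs[:, None] - xs[None, :]) ** 2
    # Gaussian bias: near-zero penalty for close patches, large for distant ones.
    scores = scores - dist2 / (2.0 * sigma ** 2)
    # Softmax over keys (numerically stabilized).
    scores = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights = scores / scores.sum(axis=1, keepdims=True)
    return weights @ patch_tokens, weights
```

With identical input tokens, the content scores are uniform, so the spatial bias alone determines the attention pattern: each patch attends most to itself and its immediate neighbours, which is the localization behaviour the abstract refers to.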