Vision-Aware Text Features in Referring Image Segmentation: From Object Understanding to Context Understanding

Hai Nguyen-Truong, E-Ro Nguyen, Tuan-Anh Vu, Minh-Triet Tran, Binh-Son Hua, Sai-Kit Yeung; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 4988-4998

Abstract


Referring image segmentation is a challenging task that involves generating pixel-wise segmentation masks based on natural language descriptions. The complexity of this task increases with the intricacy of the sentences provided. Existing methods have relied mostly on visual features to generate the segmentation masks, treating text features as supporting components. However, this under-utilization of text understanding limits the model's capability to fully comprehend the given expressions. In this work, we propose a novel framework that specifically emphasizes object and context comprehension, inspired by human cognitive processes, through Vision-Aware Text Features. First, we introduce a CLIP Prior module to localize the main object of interest and embed the object heatmap into the query initialization process. Second, we propose a combination of two components, a Contextual Multimodal Decoder and a Meaning Consistency Constraint, to further enhance the coherent and consistent interpretation of language cues with the contextual understanding obtained from the image. Our method achieves significant performance improvements on three benchmark datasets: RefCOCO, RefCOCO+, and G-Ref. Project page: https://vatex.hkustvgd.com.
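The CLIP Prior idea described above can be sketched in a few lines: score each image patch against the sentence embedding, normalize the scores into a heatmap, and fold the heatmap-pooled visual feature into the query initialization. This is a minimal illustration with random placeholder embeddings, not the paper's implementation; the function names, embedding shapes, and the additive query-initialization scheme are assumptions for the sketch.

```python
import numpy as np

def clip_prior_heatmap(patch_embeds, text_embed):
    """Illustrative CLIP-prior-style heatmap (assumed scheme, not the paper's
    exact module): cosine similarity between each image patch embedding and
    the sentence embedding, softmax-normalized over patches."""
    # Normalize to unit length so dot products become cosine similarities.
    p = patch_embeds / np.linalg.norm(patch_embeds, axis=-1, keepdims=True)
    t = text_embed / np.linalg.norm(text_embed)
    sims = p @ t                       # (H*W,) patch-text similarities
    e = np.exp(sims - sims.max())      # numerically stable softmax
    return e / e.sum()                 # heatmap sums to 1 over patches

def init_queries(queries, heatmap, patch_embeds):
    """Bias the segmentation queries toward the localized object by adding
    the heatmap-weighted pooled patch feature (an assumed initialization)."""
    pooled = heatmap @ patch_embeds    # (D,) attention-pooled visual feature
    return queries + pooled            # broadcast over all queries

# Toy example: a 4x4 patch grid with 8-dim embeddings and 5 queries.
rng = np.random.default_rng(0)
patches = rng.normal(size=(16, 8))
text = rng.normal(size=8)
heatmap = clip_prior_heatmap(patches, text)
queries = init_queries(np.zeros((5, 8)), heatmap, patches)
```

In a real system the patch and text embeddings would come from CLIP's image and text encoders; the point of the sketch is only that the text-conditioned heatmap gives the decoder queries an object-centric starting point.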

Related Material


[bibtex]
@InProceedings{Nguyen-Truong_2025_WACV,
  author    = {Nguyen-Truong, Hai and Nguyen, E-Ro and Vu, Tuan-Anh and Tran, Minh-Triet and Hua, Binh-Son and Yeung, Sai-Kit},
  title     = {Vision-Aware Text Features in Referring Image Segmentation: From Object Understanding to Context Understanding},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {4988-4998}
}