LLM-Seg: Bridging Image Segmentation and Large Language Model Reasoning

Junchi Wang, Lei Ke; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 1765-1774

Abstract


Understanding human instructions to identify target objects is vital for perception systems. In recent years, the advancements of Large Language Models (LLMs) have introduced new possibilities for image segmentation. In this work, we delve into reasoning segmentation, a novel task that enables a segmentation system to reason about and interpret implicit user intention via large language model reasoning, and then segment the corresponding target. Our work on reasoning segmentation contributes to both methodological design and dataset labeling. For the model, we propose a new framework named LLM-Seg. LLM-Seg effectively connects the current foundational Segment Anything Model (SAM) and the LLM via mask proposal selection. For the dataset, we propose an automatic data-generation pipeline and construct a new reasoning segmentation dataset named LLM-Seg40K. Experiments demonstrate that LLM-Seg exhibits competitive performance compared with existing methods. Furthermore, our proposed pipeline can efficiently produce high-quality reasoning segmentation datasets. The LLM-Seg40K dataset developed through this pipeline serves as a new benchmark for training and evaluating various reasoning segmentation approaches. Our code, models, and dataset are available at https://github.com/wangjunchi/LLMSeg.
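The mask-proposal-selection idea can be illustrated with a minimal, hypothetical sketch: generate candidate mask embeddings (as a SAM-style proposal stage would), then pick the proposal whose embedding is most similar to an instruction embedding produced by the LLM. All function names and the similarity criterion here are illustrative assumptions, not the authors' actual implementation.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def select_mask_proposal(proposal_embs, query_emb):
    """Return the index of the mask-proposal embedding that best
    matches the instruction (query) embedding under cosine similarity."""
    return max(range(len(proposal_embs)),
               key=lambda i: cosine(proposal_embs[i], query_emb))

# Toy example: three 2-D proposal embeddings and one instruction embedding.
proposals = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
query = [0.1, 0.9]
print(select_mask_proposal(proposals, query))  # -> 1
```

In practice the embeddings would be high-dimensional features from the segmentation and language models, and the selection could be learned rather than a fixed similarity, but the structure (propose, score, select) is the same.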

Related Material


[pdf]
[bibtex]
@InProceedings{Wang_2024_CVPR,
  author    = {Wang, Junchi and Ke, Lei},
  title     = {LLM-Seg: Bridging Image Segmentation and Large Language Model Reasoning},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {1765-1774}
}