Segment Anything Model for Road Network Graph Extraction

Congrui Hetang, Haoru Xue, Cindy Le, Tianwei Yue, Wenping Wang, Yihui He; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 2556-2566

Abstract


We propose SAM-Road, an adaptation of the Segment Anything Model (SAM) for extracting large-scale, vectorized road network graphs from satellite imagery. To predict graph geometry, we formulate it as a dense semantic segmentation task, leveraging the inherent strengths of SAM. The image encoder of SAM is fine-tuned to produce probability masks for roads and intersections, from which the graph vertices are extracted via simple non-maximum suppression. To predict graph topology, we design a lightweight transformer-based graph neural network, which leverages the SAM image embeddings to estimate the edge existence probabilities between vertices. Our approach directly predicts the graph vertices and edges for large regions without expensive and complex post-processing heuristics, and is capable of building complete road network graphs spanning multiple square kilometers in a matter of seconds. With its simple, straightforward, and minimalist design, SAM-Road achieves accuracy comparable to the state-of-the-art method RNGDet++ while being 40 times faster on the City-scale dataset. We thus demonstrate the power of a foundational vision model when applied to a graph learning task. The code is available at https://github.com/htcr/sam_road.
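The abstract describes a two-stage pipeline: dense probability masks for roads and intersections are converted into graph vertices by non-maximum suppression, and a small transformer scores edge existence between vertex pairs using the SAM image embeddings. The sketch below illustrates these two steps with hypothetical helpers (extract_vertices, sample_embeddings, EdgePredictor) built on a generic PyTorch transformer encoder; it is a minimal illustration of the idea under assumed shapes and names, not the actual SAM-Road implementation from the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def extract_vertices(prob_mask, threshold=0.5, nms_radius=4):
    """Pick local-maximum pixels of a probability mask as graph vertices.

    prob_mask: (H, W) tensor of per-pixel road/intersection probabilities.
    Returns an (N, 2) tensor of (row, col) vertex coordinates.
    """
    # Keep only pixels that equal the maximum within their neighborhood
    # (a simple grid non-maximum suppression) and exceed the threshold.
    pooled = F.max_pool2d(
        prob_mask[None, None], kernel_size=2 * nms_radius + 1,
        stride=1, padding=nms_radius,
    )[0, 0]
    keep = (prob_mask == pooled) & (prob_mask > threshold)
    return keep.nonzero()


def sample_embeddings(feature_map, coords):
    """Bilinearly sample per-vertex features from a (C, H, W) embedding map."""
    C, H, W = feature_map.shape
    # Normalize pixel coordinates to [-1, 1] in (x, y) order for grid_sample.
    grid = torch.stack(
        [coords[:, 1] / (W - 1) * 2 - 1, coords[:, 0] / (H - 1) * 2 - 1], dim=-1
    ).view(1, -1, 1, 2)
    sampled = F.grid_sample(feature_map[None], grid, align_corners=True)
    return sampled[0, :, :, 0].t()  # (N, C)


class EdgePredictor(nn.Module):
    """Scores edge existence for vertex pairs from sampled image embeddings."""

    def __init__(self, embed_dim=256, num_layers=2, num_heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(embed_dim, 1)

    def forward(self, pair_features):
        # pair_features: (num_pairs, tokens_per_pair, embed_dim), e.g. the
        # embeddings sampled at the two endpoints of each candidate edge.
        tokens = self.encoder(pair_features)
        return torch.sigmoid(self.head(tokens.mean(dim=1))).squeeze(-1)


if __name__ == "__main__":
    # Toy usage with random tensors standing in for the SAM outputs.
    mask = torch.rand(512, 512)         # road/intersection probability mask
    feats = torch.rand(256, 512, 512)   # dense image embedding map
    verts = extract_vertices(mask, threshold=0.9)
    vert_feats = sample_embeddings(feats, verts)
    if len(verts) >= 2:
        # Treat consecutive vertices as candidate pairs for illustration.
        pairs = torch.stack([vert_feats[:-1], vert_feats[1:]], dim=1)
        edge_probs = EdgePredictor()(pairs)
        print(verts.shape, edge_probs.shape)
```

In this sketch, candidate pairs are formed arbitrarily for demonstration; in practice one would restrict edge scoring to spatially nearby vertex pairs so the cost stays linear in the number of vertices rather than quadratic.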

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Hetang_2024_CVPR,
    author    = {Hetang, Congrui and Xue, Haoru and Le, Cindy and Yue, Tianwei and Wang, Wenping and He, Yihui},
    title     = {Segment Anything Model for Road Network Graph Extraction},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {2556-2566}
}