@InProceedings{Jain_2025_WACV,
  author    = {Jain, Pallavi and Ienco, Dino and Interdonato, Roberto and Berchoux, Tristan and Marcos, Diego},
  title     = {SenCLIP: Enhancing Zero-Shot Land-Use Mapping for Sentinel-2 with Ground-Level Prompting},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {5656-5665}
}
SenCLIP: Enhancing Zero-Shot Land-Use Mapping for Sentinel-2 with Ground-Level Prompting
Abstract
Pre-trained vision-language models (VLMs) such as CLIP demonstrate impressive zero-shot classification capabilities with free-form prompts and even show some generalization in specialized domains. However, their performance on satellite imagery is limited due to the underrepresentation of such data in their training sets, which predominantly consist of ground-level images. Existing prompting techniques for satellite imagery are often restricted to generic phrases like "a satellite image of...", limiting their effectiveness for zero-shot land-use/land-cover (LULC) mapping. To address these challenges, we introduce SenCLIP, which transfers CLIP's representation to Sentinel-2 imagery by leveraging a large dataset of Sentinel-2 images paired with geotagged ground-level photos from across Europe. We evaluate SenCLIP alongside other state-of-the-art remote sensing VLMs on zero-shot LULC mapping tasks using the EuroSAT and BigEarthNet datasets, with both aerial and ground-level prompting styles. Our approach, which aligns ground-level representations with satellite imagery, demonstrates significant improvements in classification accuracy across both prompt styles, opening new possibilities for applying free-form textual descriptions in zero-shot LULC mapping. Code, dataset, and pretrained models are available at https://github.com/pallavijain-pj/SenCLIP.