PiCaZo: Pixel-Aligned Contrastive Learning for Zero-Shot Domain Adaptation
Abstract
Domain adaptation has been extensively studied in computer vision, but it typically requires access to target-domain images during training, which is not always practical. Zero-shot domain adaptation overcomes this limitation by adapting a CLIP-based model trained on a source domain using only a natural-language description of the target domain, i.e., a prompt, eliminating the need for target-domain images. Existing approaches depend on domain IDs and domain-specific models: prior knowledge is needed to select the correct model, and a separate model must be trained for each target domain using CLIP text embeddings, which limits their flexibility. To address these limitations, we propose PiCaZo, a novel Pixel-aligned Contrastive learning approach for Zero-shot domain adaptation. To capture multi-perspective representations in the style of potential target domains and to incorporate local contextual cues across diverse domains, we introduce VOLT-PIN alongside Contrastive NightVision Learning. During model adaptation, Contrastive Domain Alignment (CDA) aligns text-rectified rain and fog features, while Fog-Adaptive Contrastive Training (FACT) improves decoder adaptation. These components operate in synergy to refine the simulation process, ensuring better alignment between simulated features and the target text for more effective adaptation. Extensive semantic segmentation experiments show that PiCaZo surpasses the baseline by an average of 3.6% across three benchmark datasets while preserving inference-time model complexity, and it also outperforms state-of-the-art methods.
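The abstract does not specify the exact training objective, but as a rough illustration of what pixel-aligned contrastive learning against CLIP text embeddings can look like, the sketch below pulls each pixel feature toward the text embedding of its class prompt while pushing it away from the others. The function name, tensor shapes, and temperature are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def pixel_aligned_contrastive_loss(pixel_feats, text_embeds, labels, temperature=0.07):
    """Illustrative pixel-text contrastive loss (assumed formulation, not PiCaZo's).

    pixel_feats: (B, C, H, W) per-pixel image features, e.g. from a CLIP-based backbone.
    text_embeds: (K, C) text embeddings, one per class prompt.
    labels:      (B, H, W) integer class map; each pixel is attracted to its
                 class's text embedding and repelled from the other K-1 embeddings.
    """
    B, C, H, W = pixel_feats.shape
    feats = pixel_feats.permute(0, 2, 3, 1).reshape(-1, C)  # (B*H*W, C)
    feats = F.normalize(feats, dim=-1)                      # cosine-similarity space
    text = F.normalize(text_embeds, dim=-1)                 # (K, C)
    logits = feats @ text.t() / temperature                 # (B*H*W, K)
    # Cross-entropy over pixel-to-text similarities = InfoNCE with text anchors.
    return F.cross_entropy(logits, labels.reshape(-1))

# Toy usage with random tensors standing in for real features and prompt embeddings.
feats = torch.randn(2, 512, 8, 8)
texts = torch.randn(19, 512)                # e.g., one CLIP text embedding per class
labels = torch.randint(0, 19, (2, 8, 8))
loss = pixel_aligned_contrastive_loss(feats, texts, labels)
```

In the zero-shot setting the abstract describes, the class prompts would encode the target-domain description (e.g., night, rain, or fog conditions), so a loss of this shape can align simulated features with target text without any target-domain images.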
Related Material

[pdf]
[bibtex]
@InProceedings{Sikdar_2025_CVPR,
    author    = {Sikdar, Aniruddh and Kishor, Arya and Kadam, Ishika and Sundaram, Suresh},
    title     = {PiCaZo: Pixel-Aligned Contrastive Learning for Zero-Shot Domain Adaptation},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops},
    month     = {June},
    year      = {2025},
    pages     = {6534-6544}
}