AdaDCP: Learning an Adapter with Discrete Cosine Prior for Clear-to-Adverse Domain Generalization

Qi Bi, Yixian Shen, Jingjun Yi, Gui-Song Xia; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025, pp. 12997-13008

Abstract


Vision Foundation Models (VFMs) provide an inherent generalization ability to unseen domains for downstream tasks. However, fine-tuning a VFM to parse various adverse scenes (e.g., fog, snow, night) is particularly challenging, as such samples are difficult to collect. Using easy-to-acquire clear scenes as the source domain is a feasible solution, but a huge domain gap exists between clear and adverse scenes due to their dramatically different appearances. To address this challenge, this paper proposes AdaDCP, a VFM adapter with a discrete cosine prior. The innovation originates from the observation that, after the discrete cosine transform, the frequency components from a VFM exhibit either variant or invariant properties under adverse weather conditions. Technically, weather-invariant property learning perceives most of the scene content that is invariant to the adverse condition. Weather-variant property learning, in contrast, perceives the weather-specific information from different types of adverse conditions. Finally, weather-invariant property alignment implicitly enforces the weather-variant components to incorporate the weather-invariant information, thereby mitigating the clear-to-adverse domain gap. Experiments conducted on eight unseen adverse-scene segmentation datasets show its state-of-the-art performance.
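To make the frequency-decomposition idea concrete, the following is a minimal, hypothetical sketch of splitting a feature map into low- and high-frequency parts via the discrete cosine transform. The function name, the fixed triangular cutoff `radius`, and the low/high split itself are illustrative assumptions, not the paper's learned separation of weather-invariant and weather-variant components.

```python
import numpy as np
from scipy.fft import dctn, idctn


def split_frequency_components(feat, radius=4):
    """Split a 2-D feature map into low- and high-frequency parts via DCT.

    Hypothetical stand-in for AdaDCP's learned separation: a fixed
    low-frequency cutoff `radius` plays the role of the weather-invariant
    band, and the remainder the weather-variant band.
    """
    coeffs = dctn(feat, norm="ortho")          # 2-D DCT-II of the feature map
    h, w = coeffs.shape
    yy, xx = np.mgrid[0:h, 0:w]
    low_mask = (yy + xx) < radius              # triangular low-frequency band
    low = idctn(np.where(low_mask, coeffs, 0.0), norm="ortho")
    high = idctn(np.where(low_mask, 0.0, coeffs), norm="ortho")
    return low, high


feat = np.random.default_rng(0).standard_normal((8, 8))
low, high = split_frequency_components(feat)
# The DCT is linear and orthonormal, so the two bands sum back to the input.
assert np.allclose(low + high, feat)
```

Because the DCT is an orthonormal linear transform, zeroing complementary coefficient sets yields an exact additive decomposition, which is what makes band-wise treatment of features well defined.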

Related Material


[pdf]
[bibtex]
@InProceedings{Bi_2025_ICCV,
    author    = {Bi, Qi and Shen, Yixian and Yi, Jingjun and Xia, Gui-Song},
    title     = {AdaDCP: Learning an Adapter with Discrete Cosine Prior for Clear-to-Adverse Domain Generalization},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {12997-13008}
}