@InProceedings{Kumar_2025_WACV,
  author    = {Kumar, Rohit and Sharma, Tanishq and Vaghela, Vedanshi and Jha, Sanjeev K. and Agarwal, Akshay},
  title     = {PrecipFormer: Efficient Transformer for Precipitation Downscaling},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV) Workshops},
  month     = {February},
  year      = {2025},
  pages     = {489-497}
}
PrecipFormer: Efficient Transformer for Precipitation Downscaling
Abstract
Precipitation downscaling, which enhances the spatial resolution of gridded precipitation data, remains a critical challenge in climate modeling and hydrological applications. While Vision Transformers (ViTs) have demonstrated remarkable success in various computer vision tasks through their ability to capture long-range dependencies, their application to precipitation downscaling remains largely unexplored due to computational constraints and the challenge of effectively modeling both local and global precipitation patterns. This paper introduces PrecipFormer, a computationally efficient transformer architecture specifically designed for precipitation downscaling. Our model builds upon the Low-to-High Multi-Level Vision Transformer (LMLT) mechanism, enabling parallel processing of features at multiple spatial scales while significantly reducing computational overhead. We enhance the architecture with a Convolutional Block Attention Module (CBAM) in the shallow feature extractor to adaptively focus on critical spatial regions. Through extensive experiments, we demonstrate that PrecipFormer achieves superior performance compared to state-of-the-art baselines.
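The CBAM component mentioned above is a standard attention block that refines feature maps along the channel and spatial dimensions in sequence. The following is a minimal PyTorch sketch of a generic CBAM block of the kind the abstract describes; the hyperparameters (reduction ratio, spatial kernel size) are illustrative assumptions, not the settings used in PrecipFormer.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Reweight channels using avg- and max-pooled descriptors through a shared MLP."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        # Pool over the spatial dims, run both descriptors through the shared MLP
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        scale = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * scale


class SpatialAttention(nn.Module):
    """Learn a per-pixel mask from channel-wise avg and max maps."""

    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale


class CBAM(nn.Module):
    """Channel attention followed by spatial attention; output shape equals input shape."""

    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        return self.sa(self.ca(x))


# Example: refine a batch of 32-channel feature maps from a shallow extractor
feats = torch.randn(2, 32, 16, 16)
out = CBAM(32, reduction=8)(feats)
print(out.shape)  # torch.Size([2, 32, 16, 16])
```

Because CBAM preserves the feature-map shape, it can be dropped into a shallow feature extractor between convolutional layers without changing the rest of the architecture.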