Frequency-Prior Enhanced Ambient Lighting Normalization via Visual Perceptual Refinement
Abstract
The Ambient Lighting Normalization task aims to recover the detailed information lost in an image under uneven lighting. Existing methods primarily focus on image reconstruction driven by pixel-level losses and structural similarity constraints, often neglecting the visual perceptual quality of the restored image. To address this, we propose a novel two-stage optimization framework. In the first stage, a frequency-domain prior-enhanced model improves the understanding of diverse lighting conditions, optimizing the overall brightness distribution of the image while ensuring shadow removal and structural integrity. The second stage builds on this restoration accuracy, employing a lightweight and efficient restoration network for a second round of optimization so that the restored result aligns more closely with human perception. In addition, we design stage-specific combinations of loss functions to fully exploit the strengths of each model. Benefiting from the synergy between the two-stage optimization framework and these training strategies, our approach achieves significant improvements in both objective accuracy and perceptual quality, and won first place in the NTIRE 2025 Ambient Lighting Normalization Challenge.
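The abstract describes the pipeline only at a high level, so the following is a minimal PyTorch sketch of how such a two-stage scheme could be wired: a frequency-domain prior (FFT amplitude and phase) conditions an accuracy-oriented first-stage restorer, and a lightweight second-stage network refines its output toward perceptual quality, with a different loss combination per stage. All module names (FrequencyPrior, TwoStagePipeline), interfaces, and loss weights are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two-stage pipeline described in the abstract.
# Module names, signatures, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.fft


class FrequencyPrior(nn.Module):
    """Extracts an amplitude/phase prior from the input via FFT (assumed design)."""

    def forward(self, x):
        freq = torch.fft.rfft2(x, norm="ortho")
        return torch.abs(freq), torch.angle(freq)


class TwoStagePipeline(nn.Module):
    """Stage 1: frequency-prior-enhanced restoration (accuracy-oriented).
    Stage 2: lightweight residual refinement toward perceptual quality."""

    def __init__(self, stage1_net: nn.Module, stage2_net: nn.Module):
        super().__init__()
        self.freq_prior = FrequencyPrior()
        self.stage1 = stage1_net  # e.g. a restorer conditioned on the frequency prior
        self.stage2 = stage2_net  # e.g. a small refinement network

    def forward(self, low_quality):
        amplitude, phase = self.freq_prior(low_quality)
        coarse = self.stage1(low_quality, amplitude, phase)  # assumed interface
        refined = coarse + self.stage2(coarse)                # residual refinement
        return coarse, refined


def stage1_loss(coarse, target, ssim_loss):
    # Accuracy-oriented objective: pixel fidelity + structural similarity
    # + a frequency-domain term (weights are placeholders).
    l1 = nn.functional.l1_loss(coarse, target)
    ssim = ssim_loss(coarse, target)
    freq = nn.functional.l1_loss(
        torch.abs(torch.fft.rfft2(coarse, norm="ortho")),
        torch.abs(torch.fft.rfft2(target, norm="ortho")),
    )
    return l1 + 0.2 * ssim + 0.1 * freq


def stage2_loss(refined, target, perceptual_loss):
    # Perception-oriented objective: a small fidelity anchor plus a
    # perceptual metric such as LPIPS (passed in as a callable).
    return 0.5 * nn.functional.l1_loss(refined, target) + perceptual_loss(refined, target)
```

Under this reading, stage 1 would be trained first with `stage1_loss`, and stage 2 would then be optimized with `stage2_loss` on top of the frozen or fine-tuned stage-1 output, mirroring the stage-specific loss combinations described in the abstract.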
Related Material
[pdf] [bibtex]
@InProceedings{Bao_2025_CVPR,
  author    = {Bao, Yuanfei and Lu, Xin and Wang, Xingbo and Yang, Jiarong and Hu, Anya and Wang, Kunyu and Xiao, Jie and Li, Dong and Fu, Xueyang and Zha, Zheng-Jun},
  title     = {Frequency-Prior Enhanced Ambient Lighting Normalization via Visual Perceptual Refinement},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2025},
  pages     = {1216-1226}
}