HDR Reconstruction Boosting with Training-Free and Exposure-Consistent Diffusion

Yo-Tin Lin, Su-Kai Chen, Hou-Ning Hu, Yen-Yu Lin, Yu-Lun Liu; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2026, pp. 7513-7523

Abstract

Single-image LDR-to-HDR reconstruction remains challenging in over-exposed regions, where traditional methods often fail due to complete information loss. We present a training-free approach that enhances existing indirect HDR reconstruction methods through diffusion-based inpainting. Our method combines text-guided diffusion models with SDEdit refinement to generate plausible content in over-exposed areas while maintaining consistency across multi-exposure LDR images. Unlike previous approaches that require extensive training, our method integrates seamlessly with existing indirect HDR reconstruction techniques through an iterative compensation mechanism that ensures luminance coherence across multiple exposures. We demonstrate significant improvements in both perceptual quality and quantitative metrics on standard HDR datasets and in-the-wild captures. Results show that our method effectively recovers natural details in challenging scenarios while preserving the advantages of existing HDR reconstruction pipelines.
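To make the exposure-consistency idea concrete, the following is a minimal conceptual sketch of inpainting over-exposed pixels in a reference LDR and propagating the result to the other exposures via their exposure ratios. It is not the authors' implementation: `inpaint_overexposed` is a hypothetical stand-in for the text-guided diffusion inpainting with SDEdit refinement described in the abstract, and the saturation threshold and single-pass propagation are illustrative assumptions.

```python
import numpy as np

def inpaint_overexposed(img, mask):
    # Hypothetical stand-in for text-guided diffusion inpainting + SDEdit
    # refinement: here we simply fill saturated pixels with the mean of
    # the well-exposed pixels.
    filled = img.copy()
    filled[mask] = img[~mask].mean() if (~mask).any() else 0.5
    return filled

def exposure_consistent_fill(ldr_stack, exposures, ref_idx=0, thresh=0.95):
    """Inpaint over-exposed pixels in a reference LDR, then re-expose the
    filled content for every other LDR in the stack so luminance stays
    coherent across exposures (a sketch of the compensation idea, not the
    paper's implementation).

    ldr_stack: list of 2D arrays with values in [0, 1]
    exposures: relative exposure time of each LDR
    """
    stack = [im.astype(np.float64).copy() for im in ldr_stack]
    ref = stack[ref_idx]
    mask = ref >= thresh                      # over-exposed pixels
    ref = inpaint_overexposed(ref, mask)      # plausible content in saturated areas
    # Convert the filled reference to scene-linear radiance, then
    # re-expose it for each of the other LDRs in the stack.
    radiance = ref / exposures[ref_idx]
    for i, t in enumerate(exposures):
        if i != ref_idx:
            stack[i][mask] = np.clip(radiance[mask] * t, 0.0, 1.0)
    stack[ref_idx] = ref
    return stack
```

Because each LDR receives the same inpainted radiance scaled by its own exposure, the filled regions satisfy the same brightness ratios as the rest of the stack, so a downstream multi-exposure merge sees no inconsistency.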

Related Material

[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Lin_2026_WACV,
    author    = {Lin, Yo-Tin and Chen, Su-Kai and Hu, Hou-Ning and Lin, Yen-Yu and Liu, Yu-Lun},
    title     = {HDR Reconstruction Boosting with Training-Free and Exposure-Consistent Diffusion},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {March},
    year      = {2026},
    pages     = {7513-7523}
}