Deep Video Inverse Tone Mapping Based on Temporal Clues

Yuyao Ye, Ning Zhang, Yang Zhao, Hongbin Cao, Ronggang Wang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 25995-26004

Abstract


Inverse tone mapping (ITM) aims to reconstruct high dynamic range (HDR) radiance from low dynamic range (LDR) content. Although many deep image ITM methods can generate impressive results, the field of video ITM remains largely unexplored. Processing video sequences with image ITM methods may cause temporal inconsistency, and such methods cannot exploit the potentially useful information in the temporal domain. In this paper, we analyze the process of video filming and then propose a Global Sample and Local Propagate strategy to better find and utilize temporal clues. To realize the proposed strategy, we design a two-stage pipeline that includes two modules: the Incremental Clue Aggregation Module and the Feature and Clue Propagation Module. They align and fuse frames effectively under brightness changes and efficiently propagate features and temporal clues to all frames. Our temporal-clue-based video ITM method recovers realistic and temporally consistent results with high fidelity in over-exposed regions. Qualitative and quantitative experiments on public datasets show that the proposed method has significant advantages over existing methods.
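
The abstract describes a two-stage design: globally sampled frames feed an Incremental Clue Aggregation Module, and a Feature and Clue Propagation Module then carries the aggregated clues to every frame. The sketch below is a hypothetical PyTorch illustration of how such a pipeline could be wired together. The module internals, tensor shapes, the sample_stride parameter, and the class layout are assumptions for illustration only and do not reflect the authors' actual implementation.

# Hypothetical sketch of the two-stage video ITM pipeline described in the abstract.
# All internals below are assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class IncrementalClueAggregationModule(nn.Module):
    """Stage 1 (assumed): incrementally fold globally sampled frame features into a clue state."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.align = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        self.fuse = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, clue: torch.Tensor, sampled_feat: torch.Tensor) -> torch.Tensor:
        # Align the sampled frame against the running clue, then aggregate residually.
        aligned = self.align(torch.cat([clue, sampled_feat], dim=1))
        return clue + self.fuse(aligned)


class FeatureAndCluePropagationModule(nn.Module):
    """Stage 2 (assumed): propagate features and clues frame by frame."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.propagate = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, prev_state: torch.Tensor, cur_feat: torch.Tensor) -> torch.Tensor:
        return self.propagate(torch.cat([prev_state, cur_feat], dim=1))


class VideoITMSketch(nn.Module):
    """Toy wrapper: globally sample -> aggregate clues -> locally propagate -> decode HDR."""

    def __init__(self, channels: int = 64, sample_stride: int = 8):
        super().__init__()
        self.sample_stride = sample_stride
        self.encoder = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        self.icam = IncrementalClueAggregationModule(channels)
        self.fcpm = FeatureAndCluePropagationModule(channels)
        self.decoder = nn.Conv2d(channels, 3, kernel_size=3, padding=1)

    def forward(self, ldr_video: torch.Tensor) -> torch.Tensor:
        # ldr_video: (T, 3, H, W) sequence of LDR frames.
        feats = [self.encoder(frame.unsqueeze(0)) for frame in ldr_video]
        # Global sampling: aggregate clues from frames spaced across the whole clip.
        clue = torch.zeros_like(feats[0])
        for idx in range(0, len(feats), self.sample_stride):
            clue = self.icam(clue, feats[idx])
        # Local propagation: carry the state through every frame and decode HDR output.
        state, hdr_frames = clue, []
        for feat in feats:
            state = self.fcpm(state, feat)
            hdr_frames.append(self.decoder(state))
        return torch.cat(hdr_frames, dim=0)

In this toy version the global stage visits every sample_stride-th frame once, while the local stage runs sequentially over all frames, mirroring the sample-then-propagate split described in the abstract.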

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Ye_2024_CVPR,
    author    = {Ye, Yuyao and Zhang, Ning and Zhao, Yang and Cao, Hongbin and Wang, Ronggang},
    title     = {Deep Video Inverse Tone Mapping Based on Temporal Clues},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {25995-26004}
}