Gain-first or Exposure-first: Benchmark for Better Low-light Video Photography and Enhancement

Haiyang Jiang, Zhihang Zhong, Yinqiang Zheng; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 1345-1356

Abstract


Acquiring visually pleasing videos under insufficient lighting has been an important and challenging task for both photographers and algorithm engineers. Current methods have evolved into two major paradigms: prioritizing camera gain, which induces a higher level of noise, and prioritizing exposure time, which brings about undesirable motion blur. Though both paths can lead to satisfying outputs, there is still a lack of direct comparison between them in the context of fast-evolving deep learning algorithms, which can be crucial for shedding light on better ways of capturing and enhancement. In this paper, we present a thorough study using state-of-the-art image and video enhancement frameworks, comparing Gain-first, Exposure-first, and Mixed strategies on a large dataset collected by a special optical system, so that the three strategies can compete fairly under controlled conditions. Experimental results across multiple camera gain levels and exposure time settings, as well as a theoretical analysis, show the advantages of the Gain-first strategy over the Exposure-first one under relatively small ratios, and the superiority of the Mixed strategy in extreme low-light cases, providing a basis for optimal videography and enhancement algorithm designs in the future.
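
As a rough illustration of the trade-off the abstract describes (not the paper's capture pipeline or noise model), the short NumPy sketch below simulates a moving step edge captured at a fixed target brightness: the Gain-first capture uses a short exposure and amplifies the readout, while the Exposure-first capture integrates longer. All parameter values (photon flux, read noise, motion speed, gain) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def capture(scene, exposure, gain, flux=5.0, read_noise=2.0, speed=40.0):
    """Simulate one sensor row: motion blur over the exposure window,
    Poisson shot noise, Gaussian read noise, then digital gain."""
    blur_px = max(1, int(round(speed * exposure)))
    blurred = np.mean([np.roll(scene, s) for s in range(blur_px)], axis=0)
    photons = rng.poisson(blurred * flux * exposure)           # shot noise
    raw = photons + rng.normal(0.0, read_noise, scene.size)    # read noise, once per readout
    return gain * raw, blur_px

scene = np.zeros(256)
scene[128:] = 1.0                        # a bright step edge moving at `speed` px per unit time

# Both captures target the same mean brightness: flux * exposure * gain is held constant.
gain_img, gain_blur = capture(scene, exposure=0.25, gain=4.0)  # Gain-first: short exposure, high gain
expo_img, expo_blur = capture(scene, exposure=1.00, gain=1.0)  # Exposure-first: long exposure, no gain

for name, img, blur in [("Gain-first", gain_img, gain_blur),
                        ("Exposure-first", expo_img, expo_blur)]:
    noise = img[200:250].std()           # flat bright region away from the edge -> noise level
    print(f"{name:15s} noise std ~ {noise:4.1f}   motion blur ~ {blur} px")
```

Running this toy simulation reproduces the qualitative pattern the benchmark studies: the Gain-first capture is noisier but sharper, while the Exposure-first capture is cleaner but more blurred, and the preferable side depends on how much the exposure is shortened relative to the scene motion.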

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Jiang_2024_CVPR,
    author    = {Jiang, Haiyang and Zhong, Zhihang and Zheng, Yinqiang},
    title     = {Gain-first or Exposure-first: Benchmark for Better Low-light Video Photography and Enhancement},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {1345-1356}
}