Equipping Diffusion Models with Differentiable Spatial Entropy for Low-Light Image Enhancement

Wenyi Lian, Wenjing Lian, Ziwei Luo; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 6671-6681

Abstract


Image restoration, which aims to recover high-quality images from their corrupted counterparts, often faces the challenge of being an ill-posed problem that admits multiple solutions for a single input. However, most deep learning based works simply employ the l1 loss to train their networks in a deterministic way, resulting in over-smoothed predictions with inferior perceptual quality. In this work, we propose a novel method that shifts the focus from deterministic pixel-by-pixel comparison to a statistical perspective, emphasizing the learning of distributions rather than individual pixel values. The core idea is to introduce spatial entropy into the loss function to measure the distribution difference between predictions and targets. To make this spatial entropy differentiable, we employ kernel density estimation (KDE) to approximate the probabilities of specific intensity values at each pixel from its neighboring area. Specifically, we equip diffusion models with this entropy and aim for superior accuracy and enhanced perceptual quality over the l1-based noise matching loss. In the experiments, we evaluate the proposed method on low-light enhancement over two datasets and the NTIRE 2024 challenge. These results illustrate the effectiveness of our statistics-based entropy loss. Code is available at https://github.com/shermanlian/spatial-entropy-loss.
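As a rough illustration of the core idea, the sketch below shows a differentiable entropy loss built on Gaussian KDE in PyTorch. It is a simplification, not the authors' exact formulation: the paper estimates probabilities per pixel over local neighborhoods ("spatial" entropy), whereas this sketch uses a single global soft histogram for brevity, and the names `kde_histogram`, `num_bins`, and `bandwidth` are illustrative choices rather than identifiers from the released code.

```python
# Hedged sketch of a differentiable entropy loss via Gaussian KDE.
# Assumes pred/target are tensors with intensities in [0, 1].
import torch


def kde_histogram(x: torch.Tensor, num_bins: int = 64,
                  bandwidth: float = 0.02) -> torch.Tensor:
    """Soft histogram of intensities in [0, 1] via a Gaussian kernel.

    Replacing hard binning with a Gaussian kernel around each bin center
    keeps the histogram (and hence the entropy) differentiable w.r.t. x.
    """
    centers = torch.linspace(0.0, 1.0, num_bins, device=x.device)  # bin centers (B,)
    diff = x.reshape(-1, 1) - centers.reshape(1, -1)               # (N, B) distances
    weights = torch.exp(-0.5 * (diff / bandwidth) ** 2)            # Gaussian kernel
    hist = weights.sum(dim=0)                                      # soft bin counts
    return hist / (hist.sum() + 1e-12)                             # normalize to a pmf


def entropy(p: torch.Tensor) -> torch.Tensor:
    """Shannon entropy of a probability vector (natural log)."""
    return -(p * torch.log(p + 1e-12)).sum()


def entropy_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Match the intensity distribution of pred to that of target."""
    return torch.abs(entropy(kde_histogram(pred)) - entropy(kde_histogram(target)))
```

In the diffusion setting described in the abstract, a term of this kind would stand in for (or complement) the usual l1 noise matching objective between the predicted and true noise; the authors' released code at the repository above has the definitive formulation.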

Related Material


BibTeX:

@InProceedings{Lian_2024_CVPR,
    author    = {Lian, Wenyi and Lian, Wenjing and Luo, Ziwei},
    title     = {Equipping Diffusion Models with Differentiable Spatial Entropy for Low-Light Image Enhancement},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {6671-6681}
}