Efficient Multi-Exposure Image Fusion via Filter-Dominated Fusion and Gradient-Driven Unsupervised Learning
Multi-exposure image fusion (MEF) aims to produce images with a high dynamic range of visual perception by integrating complementary information from different exposure levels, thereby bypassing the physical limits of common sensors. Despite the remarkable progress made by deep learning-based methods, little consideration has been given to innovating the fusion paradigm itself, leading to underutilized model capacity. This paper proposes a novel filter prediction-dominated fusion paradigm toward simple yet effective MEF. Specifically, we predict a series of spatially adaptive filters conditioned on hierarchically represented features to perform image-level dynamic fusion. The proposed paradigm has the following merits over previous ones: 1) it circumvents the risk of information loss arising from the implicit encoding and decoding processes within the neural network, and 2) it integrates local information more effectively, yielding more continuous spatial representations than the weight map-based paradigm. Furthermore, we propose a Gradient-driven Image Fidelity (GIF) loss for unsupervised MEF. By exploiting the informative properties of the gradient domain, GIF enables a stable, distortion-free optimization process. Experimental results demonstrate that our method achieves the best visual quality among state-of-the-art methods while reducing inference time by almost 30%. The code is available at https://github.com/keviner1/FFMEF.
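The filter-prediction fusion step described above can be sketched in numpy as follows. This is a minimal illustration, not the paper's implementation: the tensor shapes, the function name `apply_spatial_adaptive_filters`, and the assumption of single-channel inputs are all illustrative choices; in the actual method, a network predicts the per-pixel filters from hierarchical features.

```python
import numpy as np

def apply_spatial_adaptive_filters(images, filters, k=3):
    """Fuse an exposure stack with per-pixel predicted filters.

    images:  (N, H, W)      exposure stack (single channel, illustrative)
    filters: (N, H, W, k*k) one k x k kernel per pixel per source image,
             assumed already predicted by the network
    Returns the fused (H, W) image.
    """
    n, h, w = images.shape
    pad = k // 2
    # Reflect-pad so every pixel has a full k x k neighborhood
    padded = np.pad(images, ((0, 0), (pad, pad), (pad, pad)), mode="reflect")
    fused = np.zeros((h, w))
    for i in range(n):
        # Gather the k*k neighborhood of every pixel of source image i
        patches = np.stack(
            [padded[i, dy:dy + h, dx:dx + w]
             for dy in range(k) for dx in range(k)],
            axis=-1)  # (H, W, k*k)
        # Each output pixel is a dot product between its neighborhood
        # and its own predicted kernel, summed over all source images
        fused += (patches * filters[i]).sum(axis=-1)
    return fused
```

Compared with a weight map-based paradigm, which blends pixels one-to-one, each output pixel here aggregates a local neighborhood from every source image, which is what allows the fusion to produce more continuous spatial representations.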