@InProceedings{Yamawaki_2022_ACCV,
  author    = {Yamawaki, Kazuhiro and Han, Xian-Hua},
  title     = {Lightweight Hyperspectral Image Reconstruction Network with Deep Feature Hallucination},
  booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV) Workshops},
  month     = {December},
  year      = {2022},
  pages     = {164-178}
}
Lightweight Hyperspectral Image Reconstruction Network with Deep Feature Hallucination
Abstract
Hyperspectral image reconstruction from a compressive snapshot is an indispensable step in advanced hyperspectral imaging systems for overcoming the low spatial and/or temporal resolution issue. Most existing methods extensively exploit various hand-crafted priors to regularize the ill-posed hyperspectral reconstruction problem, are incapable of handling the wide spectral variety, and often result in poor reconstruction quality. In recent years, the deep convolutional neural network (CNN) has become the dominant paradigm for hyperspectral image reconstruction and has demonstrated superior performance with complicated, deep network architectures. However, these impressive CNNs usually have large model sizes and high computational costs, which limit their applicability in real imaging systems. This study proposes a novel lightweight hyperspectral reconstruction network built on effective deep feature hallucination, aiming at a practical model with a small size and high efficiency for real imaging systems. Specifically, we exploit a deep feature hallucination module (DFHM) that duplicates features with cheap operations as the main component, and stack multiple such modules to compose the lightweight architecture. In detail, the DFHM consists of a spectral hallucination block for synthesizing more channels of features and a spatial context aggregation block for exploiting contexts at various scales, thereby enhancing the spectral and spatial modeling capability with cheaper operations than the vanilla convolution layer. Experimental results on two benchmark hyperspectral datasets demonstrate that our proposed method has great superiority over state-of-the-art CNN models in reconstruction performance as well as model size.
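The abstract does not include code, but the "feature hallucination with cheap operations" idea it describes can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the authors' implementation: it assumes a ghost-module-style design in which a pointwise (1x1) convolution produces a small set of primary feature maps and a cheap depthwise 3x3 filter hallucinates additional channels from them, which are then concatenated.

```python
import numpy as np

def cheap_depthwise3x3(x, kernels):
    """Cheap per-channel (depthwise) 3x3 filtering with zero padding.

    x: (C, H, W) feature maps; kernels: (C, 3, 3), one filter per channel.
    Far fewer multiply-adds than a full C-in x C-out convolution.
    """
    C, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for c in range(C):
        for i in range(3):
            for j in range(3):
                out[c] += kernels[c, i, j] * xp[c, i:i + H, j:j + W]
    return out

def feature_hallucination(x, w_point, dw_kernels):
    """Hypothetical hallucination step: primary features from a 1x1 conv,
    extra ("hallucinated") channels from a cheap depthwise pass.

    x: (C_in, H, W); w_point: (C_prim, C_in); dw_kernels: (C_prim, 3, 3).
    Returns (2 * C_prim, H, W) feature maps.
    """
    prim = np.einsum('oc,chw->ohw', w_point, x)   # costly part: 1x1 conv
    ghost = cheap_depthwise3x3(prim, dw_kernels)  # cheap part: hallucinated maps
    return np.concatenate([prim, ghost], axis=0)
```

Under this sketch, doubling the channel count costs only `C_prim * 9 * H * W` extra multiply-adds instead of the `C_prim * C_in * H * W` a second pointwise convolution would require, which is the kind of saving a lightweight reconstruction network needs; the function names and shapes here are illustrative assumptions.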