SSTHyper: Sparse Spectral Transformer for Hyperspectral Image Reconstruction
Meng Xu, Mingying Lin, Qi Ren, Sen Jia; Proceedings of the Asian Conference on Computer Vision (ACCV), 2024, pp. 1918-1935
Abstract
Transformer-based methods have improved the quality of hyperspectral images (HSIs) reconstructed from RGB images by effectively capturing long-range dependencies. However, the self-attention mechanisms in existing Transformer models do not fully account for the spatial sparsity and spectral continuity of HSIs and cannot effectively single out the most significant features, leading to lower-quality reconstructions. To address this limitation, this paper proposes a sparse spectral Transformer model for HSI reconstruction (SSTHyper) that adaptively preserves crucial features. The network adopts an encoder-decoder structure, built primarily from sparse spectral self-attention groups, that learns both shallow and deep spatial-spectral priors. The sparse spectral self-attention mechanism adaptively masks non-significant details, improving reconstruction accuracy. In addition, a lightweight cross-level fusion network is proposed to reduce model parameters and computational cost while enhancing spatial-spectral feature extraction. Experimental results on two benchmark datasets demonstrate the outstanding performance of the proposed method. The code will be released at https://github.com/MingyingLin/SSTHyper.
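As a rough illustration of the idea described in the abstract (not the authors' released code), the sketch below shows one way a sparse spectral self-attention layer could be implemented in PyTorch: attention is computed across spectral channels, in the style of spectral-wise Transformers for RGB-to-HSI reconstruction, and all but the top-k correlations per band are masked before the softmax. The class name, top-k selection rule, and hyperparameters are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseSpectralSelfAttention(nn.Module):
    """Hypothetical sketch of sparse spectral self-attention.

    Attention runs between channels (spectral bands) rather than spatial
    positions; entries outside the top-k per query band are masked out,
    so only the strongest spectral correlations contribute to the output.
    """

    def __init__(self, dim, heads=4, topk=8):
        super().__init__()
        self.heads = heads
        self.topk = topk  # number of spectral correlations kept per band (assumed)
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)
        self.scale = nn.Parameter(torch.ones(heads, 1, 1))

    def forward(self, x):
        # x: (batch, height*width, channels)
        b, n, c = x.shape
        qkv = self.to_qkv(x).chunk(3, dim=-1)
        # Reshape so attention is taken over the channel (spectral) axis.
        q, k, v = (t.reshape(b, n, self.heads, c // self.heads)
                    .permute(0, 2, 3, 1) for t in qkv)  # (b, heads, c/h, n)
        q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale   # (b, heads, c/h, c/h)
        # Sparsify: keep only the top-k strongest correlations per query band.
        k_eff = min(self.topk, attn.shape[-1])
        thresh = attn.topk(k_eff, dim=-1).values[..., -1:]
        attn = attn.masked_fill(attn < thresh, float('-inf'))
        attn = attn.softmax(dim=-1)
        out = (attn @ v).permute(0, 3, 1, 2).reshape(b, n, c)
        return self.proj(out)
```

Masking with -inf before the softmax drives the discarded entries' weights to zero, so the layer adaptively suppresses non-significant spectral interactions while leaving the retained correlations to be learned end to end.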
Related Material
[pdf] [bibtex]
@InProceedings{Xu_2024_ACCV,
    author    = {Xu, Meng and Lin, Mingying and Ren, Qi and Jia, Sen},
    title     = {SSTHyper: Sparse Spectral Transformer for Hyperspectral Image Reconstruction},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2024},
    pages     = {1918-1935}
}