In Defense of Shallow Learned Spectral Reconstruction From RGB Images

Jonas Aeschbacher, Jiqing Wu, Radu Timofte; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2017, pp. 471-479

Abstract


Very recently, Galliani et al. proposed a method using a very deep CNN architecture for learned spectral reconstruction and showed large improvements over the recent sparse coding method of Arad et al. In this paper we defend the shallow learned spectral reconstruction methods by: (i) first, reimplementing Arad and showing that it can achieve significantly better results than those originally reported; (ii) second, introducing a novel shallow method based on A+ of Timofte et al. from super-resolution that substantially improves over Arad and, moreover, provides performance comparable to Galliani's very deep CNN method on three standard benchmarks (ICVL, CAVE, and NUS); and (iii) finally, arguing that the training and runtime efficiency, as well as the clear relation between its parameters and the achieved performance, makes our shallow A+ a strong baseline for further research in learned spectral reconstruction from RGB images. Moreover, our shallow A+ (as well as Arad) requires significantly less training data than Galliani (and CNN approaches in general), is robust to overfitting, and is easily deployable to newer cameras through fast training.
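To make the A+ idea concrete: A+ precomputes, for each anchor in a learned dictionary, a closed-form ridge regression from low-dimensional input features to the target signal using that anchor's nearest training samples; at test time each input is assigned to its nearest anchor and mapped by the stored linear projection. The sketch below illustrates this anchored-regression scheme adapted to RGB-to-spectrum mapping, with synthetic data, random anchors, and hyperparameter values chosen purely for illustration (the paper's actual dictionaries, features, and settings are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for training pairs: RGB values (3 channels) and the
# corresponding spectra (31 bands, as in ICVL-style data).
n_train, d_rgb, d_spec = 2000, 3, 31
X = rng.standard_normal((n_train, d_rgb))   # RGB features
Y = rng.standard_normal((n_train, d_spec))  # corresponding spectral samples

# 1) Anchors: in A+ these are atoms of a learned sparse dictionary; here we
#    simply use normalized random training samples as a stand-in.
n_anchors, n_neighbors, lam = 16, 256, 0.1
anchors = X[rng.choice(n_train, n_anchors, replace=False)]
anchors /= np.linalg.norm(anchors, axis=1, keepdims=True)

# 2) Offline: for each anchor, gather its nearest training samples and
#    precompute a ridge-regression projection from RGB to spectra:
#    P = Y_a^T X_a (X_a^T X_a + lam I)^{-1}
projections = []
for a in anchors:
    sims = X @ a                              # correlation with the anchor
    idx = np.argsort(-sims)[:n_neighbors]     # anchored neighborhood
    Xa, Ya = X[idx], Y[idx]
    P = Ya.T @ Xa @ np.linalg.inv(Xa.T @ Xa + lam * np.eye(d_rgb))
    projections.append(P)

# 3) Test time: assign the input to its nearest anchor and apply that
#    anchor's stored linear projection -- no optimization at runtime.
def reconstruct(x):
    k = int(np.argmax(anchors @ x))
    return projections[k] @ x

spec = reconstruct(X[0])
print(spec.shape)  # (31,)
```

Because all regressors are computed offline in closed form, both training and inference reduce to nearest-anchor search plus one matrix-vector product, which is the efficiency argument the abstract makes for the shallow approach.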

Related Material


[pdf]
[bibtex]
@InProceedings{Aeschbacher_2017_ICCV,
author = {Aeschbacher, Jonas and Wu, Jiqing and Timofte, Radu},
title = {In Defense of Shallow Learned Spectral Reconstruction From RGB Images},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2017}
}