Recurrent Color Constancy

Yanlin Qian, Ke Chen, Jarno Nikkanen, Joni-Kristian Kamarainen, Jiri Matas; The IEEE International Conference on Computer Vision (ICCV), 2017, pp. 5458-5466


We introduce a novel formulation of temporal color constancy which considers multiple frames preceding the frame for which illumination is estimated. We propose an end-to-end trainable recurrent color constancy network -- the RCC-Net -- which exploits convolutional LSTMs and a simulated sequence to learn compositional representations in space and time. We use a standard single-frame color constancy benchmark, the SFU Gray Ball Dataset, which can be adapted to a temporal setting. Extensive experiments show that the proposed method consistently outperforms single-frame state-of-the-art methods and their temporal variants.
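The network estimates the scene illuminant as an RGB vector, and color constancy methods are conventionally compared by the recovery angular error between the estimated and ground-truth illuminants. A minimal sketch of that standard metric (the function name and pure-Python implementation are illustrative, not taken from the paper):

```python
import math

def angular_error(est, gt):
    """Recovery angular error, in degrees, between two illuminant RGB vectors.

    The error depends only on the chromaticity (direction) of the vectors,
    not their magnitude, which is why it is the standard metric in
    color constancy evaluation.
    """
    dot = sum(e * g for e, g in zip(est, gt))
    norm_est = math.sqrt(sum(e * e for e in est))
    norm_gt = math.sqrt(sum(g * g for g in gt))
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    cos_angle = max(-1.0, min(1.0, dot / (norm_est * norm_gt)))
    return math.degrees(math.acos(cos_angle))

# Identical chromaticities (any scaling) give zero error;
# orthogonal vectors give 90 degrees.
print(angular_error([1.0, 1.0, 1.0], [2.0, 2.0, 2.0]))  # → 0.0
print(angular_error([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # → 90.0
```

Per-image angular errors are typically summarized over a dataset by their mean and median, which is how the comparisons against single-frame baselines are usually reported.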

Related Material

@InProceedings{Qian_2017_ICCV,
  author = {Qian, Yanlin and Chen, Ke and Nikkanen, Jarno and Kamarainen, Joni-Kristian and Matas, Jiri},
  title = {Recurrent Color Constancy},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month = {Oct},
  year = {2017}
}