Illuminant Chromaticity from Image Sequences

Veronique Prinet, Dani Lischinski, Michael Werman; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013, pp. 3320-3327

Abstract


We estimate illuminant chromaticity from temporal sequences, for scenes illuminated by either one or two dominant illuminants. While there are many methods for illuminant estimation from a single image, few works so far have focused on videos, and even fewer on multiple light sources. Our aim is to leverage the information provided by temporal acquisition, where the objects, the camera, or the light source are in motion, in order to estimate illuminant color without the need for user interaction, strong assumptions, or heuristics. We introduce a simple physically-based formulation built on the assumption that the incident light chromaticity is constant over a short space-time domain. We show that a deterministic approach is not sufficient for accurate and robust estimation; however, a probabilistic formulation makes it possible to implicitly integrate away hidden factors that have been ignored by the physical model. Experimental results are reported on a dataset of natural video sequences and on the GrayBall benchmark, indicating that we compare favorably with the state-of-the-art.
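The constant-chromaticity assumption over a short space-time window can be illustrated with a minimal sketch. This is not the paper's actual estimator: under a dichromatic-style model, the temporal difference of RGB values at a corresponding scene point is dominated by the change in the specular/illumination term, whose direction is the illuminant color. The function name, the threshold, and the simple direction averaging below are illustrative assumptions.

```python
# Minimal sketch (not the authors' method) of recovering illuminant
# chromaticity from temporal RGB differences between two registered frames.
import numpy as np

def illuminant_chromaticity_from_pair(frame_t, frame_t1, diff_thresh=0.02):
    """Estimate illuminant chromaticity (r, g) from two aligned RGB frames.

    frame_t, frame_t1 : float arrays in [0, 1], shape (H, W, 3), assumed to be
    registered (e.g. via optical flow) so that pixels correspond.
    """
    diff = frame_t1.astype(np.float64) - frame_t.astype(np.float64)
    magnitude = np.linalg.norm(diff, axis=2)

    # Keep only pixels whose temporal change is significant; elsewhere the
    # difference is dominated by noise rather than an illumination change.
    mask = magnitude > diff_thresh
    if not np.any(mask):
        return None

    # Each selected difference vector is (ideally) parallel to the illuminant
    # RGB; average their directions to obtain a single estimate.
    directions = diff[mask] / magnitude[mask][:, None]
    mean_dir = np.abs(directions.mean(axis=0))
    mean_dir /= mean_dir.sum()          # normalize so r + g + b = 1

    return mean_dir[0], mean_dir[1]      # chromaticity coordinates (r, g)
```

In the paper itself, such local constraints are not simply averaged; a probabilistic formulation aggregates them over space and time and integrates away hidden factors ignored by the deterministic model, which is what makes the estimate robust.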

Related Material


[bibtex]
@InProceedings{Prinet_2013_ICCV,
author = {Prinet, Veronique and Lischinski, Dani and Werman, Michael},
title = {Illuminant Chromaticity from Image Sequences},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2013}
}