Quantitative Evaluation of Confidence Measures in a Machine Learning World
Matteo Poggi, Fabio Tosi, Stefano Mattoccia; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 5228-5237
Abstract
Confidence measures aim at detecting unreliable depth measurements and play an important role for many purposes, in particular, as recently shown, for improving stereo accuracy. This topic was thoroughly investigated by Hu and Mordohai in 2010 (and 2012), considering 17 confidence measures and two local algorithms on the two datasets available at that time. However, since then major breakthroughs have occurred in this field: the availability of much larger and more challenging datasets, novel and more effective stereo algorithms, including ones based on deep learning, and confidence measures leveraging machine learning techniques. Therefore, this paper aims at providing an exhaustive and updated review and quantitative evaluation of 52 (actually, 76 considering variants) state-of-the-art confidence measures - focusing on recent ones, mostly based on random forests and deep learning - with three algorithms on the challenging datasets available today. Moreover, we deal with problems inherently induced by learning-based confidence measures. How well do these methods generalize to new data? How does a specific training improve their effectiveness? How can more effective confidence measures actually improve the overall stereo accuracy?
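A minimal sketch of how a confidence measure is commonly scored quantitatively in this line of work, following the sparsification/AUC protocol popularized by Hu and Mordohai: pixels are removed in order of increasing confidence and the remaining bad-pixel rate is integrated. The function name, the 3-pixel error threshold, and the number of sparsification steps below are illustrative assumptions, not details taken from this paper's released code.

import numpy as np

def sparsification_auc(confidence, disparity, ground_truth, tau=3.0, steps=20):
    # confidence, disparity, ground_truth: 1-D arrays over valid pixels.
    # tau: disparity error threshold (pixels) marking a prediction as wrong (assumed value).
    # steps: number of sparsification levels between 0% and 100% removal (assumed value).
    errors = np.abs(disparity - ground_truth) > tau   # per-pixel "bad" flags
    order = np.argsort(-confidence)                   # most confident pixels first
    errors = errors[order]
    n = errors.size

    curve = []
    for k in range(steps + 1):
        kept = n - int(round(n * k / steps))          # drop the least confident pixels
        kept = max(kept, 1)
        curve.append(errors[:kept].mean())            # bad-pixel rate on the retained set

    # Trapezoidal area under the sparsification curve; lower is better, since an
    # ideal confidence measure removes all wrong pixels first.
    return np.trapz(curve, dx=1.0 / steps)

In practice the AUC is averaged over all images of a dataset and compared against the optimal AUC obtained by removing pixels in order of decreasing true error.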
Related Material
[pdf]
[supp]
[video]
[bibtex]
@InProceedings{Poggi_2017_ICCV,
author = {Poggi, Matteo and Tosi, Fabio and Mattoccia, Stefano},
title = {Quantitative Evaluation of Confidence Measures in a Machine Learning World},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}