CV-HAZOP: Introducing Test Data Validation for Computer Vision

Oliver Zendel, Markus Murschitz, Martin Humenberger, Wolfgang Herzner; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 2066-2074

Abstract


Test data plays an important role in computer vision (CV) but is plagued by two questions: Which situations should be covered by the test data, and have we tested enough to reach a conclusion? In this paper we propose a new solution to these questions using a standard procedure devised by the safety community to validate complex systems: the Hazard and Operability Analysis (HAZOP). It is designed to systematically search for and identify difficult, performance-decreasing situations and aspects. We introduce a generic CV model that creates the basis for the hazard analysis and, for the first time, apply an extensive HAZOP to the CV domain. The result is a publicly available checklist with more than 900 identified individual hazards. This checklist can be used to evaluate existing test datasets by quantifying the number of hazards they cover. We evaluate our approach by first analyzing and annotating the popular stereo vision test datasets Middlebury and KITTI. Second, we compare the performance of six popular stereo matching algorithms on the hazards identified by our checklist with their average performance and show, as expected, a clear negative influence of the hazards. The presented approach is a useful tool for evaluating and improving test datasets and creates a common basis for future dataset designs.
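
As an illustration of the coverage measure described in the abstract, the following Python sketch quantifies how many checklist hazards a dataset's annotations cover. It is a minimal sketch under assumed data structures: the hazard IDs, the annotation mapping, and the function name are hypothetical and do not reflect the authors' released checklist format or tooling.

# Minimal sketch, assuming the CV-HAZOP checklist is a set of hazard IDs and
# dataset annotations map each frame to the hazard IDs visible in it.
# All identifiers below are illustrative, not part of the published checklist.

def hazard_coverage(checklist_ids, dataset_annotations):
    """Return the fraction of checklist hazards covered by a dataset."""
    checklist = set(checklist_ids)
    covered = set()
    for hazards_in_frame in dataset_annotations.values():
        covered |= checklist & set(hazards_in_frame)
    return len(covered) / len(checklist) if checklist else 0.0

# Toy usage: a five-entry checklist and a dataset that covers three entries.
checklist = {"H001", "H002", "H003", "H004", "H005"}
annotations = {
    "frame_000": {"H001", "H003"},
    "frame_001": {"H003", "H005"},
}
print(f"coverage: {hazard_coverage(checklist, annotations):.0%}")  # prints "coverage: 60%"

A higher value means the dataset exercises more of the identified hazards; the paper's comparison of algorithm performance on hazard regions versus average performance builds on the same kind of per-frame hazard annotation.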

Related Material


[bibtex]
@InProceedings{Zendel_2015_ICCV,
author = {Zendel, Oliver and Murschitz, Markus and Humenberger, Martin and Herzner, Wolfgang},
title = {CV-HAZOP: Introducing Test Data Validation for Computer Vision},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015},
pages = {2066-2074}
}