Exploiting Sparsity for Real Time Video Labelling

Lachlan Horne, Jose M. Alvarez, Nick Barnes; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2013, pp. 632-637

Abstract


Until recently, inference on fully connected graphs of pixel labels for scene understanding has been computationally expensive, so fast methods have focussed on neighbour connections and unary computation. However, with efficient CRF methods for inference on fully connected graphs, the opportunity exists for exploring other approaches. In this paper, we present a fast approach that calculates unary labels sparsely and relies on inference on fully connected graphs for label propagation. This reduces the unary computation, which is now the most computationally expensive component. On a standard road scene dataset (CamVid), we show that accuracy remains high when less than 0.15 percent of unary potentials are used. This achieves a reduction in computation by a factor of more than 750, with only small losses in global accuracy. This facilitates real-time processing on standard hardware that produces almost state-of-the-art results.
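To make the idea concrete, the sketch below illustrates the pipeline the abstract describes: unary potentials are computed for only a sparse fraction of pixels (a regular grid here, purely for illustration), every other pixel receives an uninformative uniform unary, and mean-field inference on a fully connected CRF propagates labels across the frame. This is not the authors' implementation: the paper relies on efficient fully connected CRF inference with appearance-dependent kernels, whereas this sketch uses only a spatial Gaussian kernel approximated with scipy's gaussian_filter, and the classifier scores, grid sampling, and all parameters (keep_fraction, sigma, weight, n_iters) are illustrative assumptions.

# Minimal sketch (assumptions noted above): sparse unaries + dense-CRF label propagation.
import numpy as np
from scipy.ndimage import gaussian_filter


def sparse_unary_potentials(scores, keep_fraction=0.0015):
    """Keep unary scores at a sparse grid of pixels; elsewhere use a uniform unary.

    scores: (H, W, L) per-pixel class log-probabilities from any classifier.
    keep_fraction: fraction of pixels whose unaries are actually computed/kept
    (0.0015 mirrors the abstract's "less than 0.15 percent" figure).
    """
    h, w, n_labels = scores.shape
    step = max(1, int(round(1.0 / np.sqrt(keep_fraction))))  # grid spacing for ~keep_fraction coverage
    unary = np.full_like(scores, np.log(1.0 / n_labels))     # uninformative unary everywhere
    unary[::step, ::step, :] = scores[::step, ::step, :]     # real unaries only on the sparse grid
    return unary


def dense_crf_meanfield(unary, sigma=15.0, weight=3.0, n_iters=5):
    """Mean-field inference for a fully connected CRF with one Gaussian spatial kernel.

    Message passing with a Gaussian kernel over all pixel pairs reduces to Gaussian
    filtering of the current marginals Q, which is the trick behind efficient
    dense-CRF inference.
    """
    q = np.exp(unary - unary.max(axis=2, keepdims=True))
    q /= q.sum(axis=2, keepdims=True)
    for _ in range(n_iters):
        # Blur each label's marginal map, approximately excluding the pixel's own contribution.
        msg = np.stack(
            [gaussian_filter(q[:, :, l], sigma) - q[:, :, l] for l in range(q.shape[2])],
            axis=2,
        )
        logits = unary + weight * msg          # Potts-style compatibility: reward agreeing labels
        logits -= logits.max(axis=2, keepdims=True)
        q = np.exp(logits)
        q /= q.sum(axis=2, keepdims=True)
    return q.argmax(axis=2)


if __name__ == "__main__":
    # Toy example: random "classifier" scores for a 240x320 frame with 11 CamVid-style classes.
    rng = np.random.default_rng(0)
    scores = rng.standard_normal((240, 320, 11))
    labels = dense_crf_meanfield(sparse_unary_potentials(scores))
    print(labels.shape)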

Related Material


[bibtex]
@InProceedings{Horne_2013_ICCV_Workshops,
author = {Lachlan Horne and Jose M. Alvarez and Nick Barnes},
title = {Exploiting Sparsity for Real Time Video Labelling},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {December},
year = {2013}
}