Semantic Depth Map Fusion for Moving Vehicle Detection in Aerial Video

Mahdieh Poostchi, Hadi Aliakbarpour, Raphael Viguier, Filiz Bunyak, Kannappan Palaniappan, Guna Seetharaman; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2016, pp. 32-40

Abstract


Automatic moving object detection and segmentation is one of the fundamental low-level tasks for many urban traffic surveillance applications. We develop an automatic moving vehicle detection system for aerial video based on the semantic fusion of the flux tensor trace and tall-structure altitude masks. The trace of the flux tensor provides spatio-temporal information about moving edges, including the undesirable motion of tall structures caused by parallax effects. These parallax-induced motions are filtered out by incorporating building altitude masks obtained from available dense 3D point clouds. Using a level-set based geodesic active contours framework, the coarse thresholded building depth masks are evolved into the actual building boundaries. Experiments are carried out on a cropped 2kx2k region of interest over 200 frames of Albuquerque urban aerial imagery. An average precision of 83% and recall of 76% are reported using an object-level detection performance evaluation method.
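The sketch below illustrates the fusion idea described in the abstract under simplifying assumptions; it is not the authors' implementation. The flux tensor trace is approximated by a windowed sum of squared temporal derivatives of the image gradients, and parallax suppression is reduced to zeroing motion responses inside a precomputed binary building mask. The function names, the Sobel/Gaussian operators, and the threshold value are illustrative choices, not details from the paper.

```python
# Minimal sketch (not the authors' code) of fusing a flux-tensor-style motion
# map with a building altitude mask to suppress parallax-induced detections.
# Assumes `frames` is a list of grayscale float32 images of equal size and
# `building_mask` is a binary array (1 = tall structure) from the 3D point cloud.
import numpy as np
from scipy import ndimage

def flux_trace(frames, sigma=1.0):
    """Approximate trace of the flux tensor: squared temporal derivatives of
    the spatial and temporal gradients, smoothed over a local window."""
    stack = np.stack(frames).astype(np.float32)   # shape (T, H, W)
    Ix = ndimage.sobel(stack, axis=2)             # spatial gradient in x
    Iy = ndimage.sobel(stack, axis=1)             # spatial gradient in y
    It = np.gradient(stack, axis=0)               # temporal gradient
    Ixt = np.gradient(Ix, axis=0)                 # d/dt of Ix
    Iyt = np.gradient(Iy, axis=0)                 # d/dt of Iy
    Itt = np.gradient(It, axis=0)                 # d/dt of It
    energy = Ixt**2 + Iyt**2 + Itt**2             # per-pixel motion energy
    mid = len(frames) // 2                        # evaluate at center frame
    return ndimage.gaussian_filter(energy[mid], sigma)

def detect_vehicles(frames, building_mask, motion_thresh=0.3):
    """Threshold the motion map, mask out tall-structure pixels to remove
    parallax responses, and return labeled connected components."""
    motion = flux_trace(frames)
    motion = motion / (motion.max() + 1e-8)       # normalize to [0, 1]
    fg = (motion > motion_thresh) & (building_mask == 0)
    labels, num_candidates = ndimage.label(fg)
    return labels, num_candidates
```

In the paper, the building masks themselves are refined with a level-set geodesic active contours step before fusion; the sketch simply assumes that refined binary mask is already available.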

Related Material


[pdf]
[bibtex]
@InProceedings{Poostchi_2016_CVPR_Workshops,
author = {Poostchi, Mahdieh and Aliakbarpour, Hadi and Viguier, Raphael and Bunyak, Filiz and Palaniappan, Kannappan and Seetharaman, Guna},
title = {Semantic Depth Map Fusion for Moving Vehicle Detection in Aerial Video},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2016}
}