Adapting a Pedestrian Detector by Boosting LDA Exemplar Classifiers

Jiaolong Xu, David Vazquez, Sebastian Ramos, Antonio M. Lopez, Daniel Ponsa; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2013, pp. 688-693

Abstract


Training vision-based pedestrian detectors on synthetic (virtual-world) datasets is a useful technique for automatically collecting training examples together with their pixelwise ground truth. However, these detectors must ultimately operate on real-world images, where they suffer a significant drop in performance. In fact, the same effect occurs across different real-world datasets, i.e. a detector's accuracy drops when the training data (source domain) and the application scenario (target domain) have inherent differences. Therefore, to avoid this problem, the detector trained with synthetic data must be adapted to operate in the real-world scenario. In this paper, we propose a domain adaptation approach based on boosting LDA exemplar classifiers from both the virtual and the real worlds. We evaluate our proposal on multiple real-world pedestrian detection datasets. The results show that our method can efficiently adapt the exemplar classifiers from virtual to real world, avoiding drops in average precision of over 15%.
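The paper itself gives the exact formulation; purely as an illustration of the two ingredients named in the abstract, the sketch below (Python/NumPy) combines a linear LDA exemplar classifier with a shared background model and discrete AdaBoost selecting among a pool of such exemplars on target-domain samples. The function names, the midpoint bias, and the choice of the discrete AdaBoost variant are assumptions for illustration, not the authors' method.

    import numpy as np

    def train_lda_exemplar(x_pos, mu_bg, sigma_inv):
        # LDA exemplar classifier: w = Sigma^{-1} (x_pos - mu_bg).
        # The background mean and covariance are estimated once from
        # negative windows and shared by all exemplars, so training one
        # exemplar is a single matrix-vector product.
        w = sigma_inv @ (x_pos - mu_bg)
        b = -0.5 * w @ (x_pos + mu_bg)  # midpoint bias (illustrative choice)
        return w, b

    def adaboost_exemplars(exemplars, X, y, n_rounds=50):
        # Discrete AdaBoost over a pool of (w, b) exemplar classifiers,
        # drawn from both virtual- and real-world examples, using
        # target-domain features X and labels y in {-1, +1}.
        n = len(y)
        D = np.full(n, 1.0 / n)                 # sample weights
        preds = [np.sign(X @ w + b) for (w, b) in exemplars]
        ensemble = []
        for _ in range(n_rounds):
            errs = [D[p != y].sum() for p in preds]
            t = int(np.argmin(errs))
            err = max(errs[t], 1e-10)
            if err >= 0.5:                      # no weak learner better than chance
                break
            alpha = 0.5 * np.log((1 - err) / err)
            ensemble.append((exemplars[t], alpha))
            D *= np.exp(-alpha * y * preds[t])  # up-weight misclassified samples
            D /= D.sum()
        return ensemble

    def predict(ensemble, X):
        # Weighted vote of the selected exemplar classifiers.
        score = sum(alpha * np.sign(X @ w + b) for ((w, b), alpha) in ensemble)
        return np.sign(score)

In this reading, domain adaptation comes from letting boosting pick whichever exemplars (virtual or real) discriminate best on the target-domain adaptation set.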

Related Material


[pdf]
[bibtex]
@InProceedings{Xu_2013_CVPR_Workshops,
author = {Xu, Jiaolong and Vazquez, David and Ramos, Sebastian and Lopez, Antonio M. and Ponsa, Daniel},
title = {Adapting a Pedestrian Detector by Boosting LDA Exemplar Classifiers},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2013},
pages = {688-693}
}