Unsupervised Generation of a Viewpoint Annotated Car Dataset From Videos

Nima Sedaghat, Thomas Brox; The IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1314-1322

Abstract


Object recognition approaches have recently been extended to yield, besides the object class output, also the viewpoint or pose. Training such approaches typically requires additional viewpoint or keypoint annotation in the training data or, alternatively, synthetic CAD models. In this paper, we present an approach that creates a dataset of images annotated with bounding boxes and viewpoint labels in a fully automated manner from videos. We assume that the scene is static in order to reconstruct 3D surfaces via structure from motion. We automatically detect when the reconstruction fails and normalize for the viewpoint of the 3D models by aligning the reconstructed point clouds. Using cars as an example, we show that we can expand a large dataset of annotated single images and obtain improved performance when training a viewpoint regressor on this joint dataset.
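The viewpoint normalization step described above relies on rigidly aligning reconstructed point clouds so that all car models share a canonical orientation. The paper's own pipeline is not reproduced here; the following is a minimal illustrative sketch, assuming known point correspondences, of one standard rigid-alignment technique (the Kabsch algorithm) and of reading off an azimuth angle from the recovered rotation:

```python
import numpy as np

def kabsch_align(P, Q):
    """Estimate the rotation R minimizing ||R @ P - Q||_F for two
    corresponded 3xN point clouds (Kabsch algorithm)."""
    Pc = P - P.mean(axis=1, keepdims=True)   # center both clouds
    Qc = Q - Q.mean(axis=1, keepdims=True)
    H = Pc @ Qc.T                            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

# Toy example: a random cloud and a copy rotated 40 deg about the z-axis.
rng = np.random.default_rng(0)
P = rng.standard_normal((3, 50))
theta = np.deg2rad(40.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
Q = R_true @ P

R_est = kabsch_align(P, Q)
# Azimuth (rotation about the vertical axis) recovered from R_est.
azimuth_deg = np.degrees(np.arctan2(R_est[1, 0], R_est[0, 0]))
```

In practice the correspondences between two independently reconstructed car models are unknown, so an iterative method such as ICP (which alternates nearest-neighbor matching with an alignment step like the one above) would be needed; the closed-form step shown here is only the inner building block.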

Related Material


[pdf]
[bibtex]
@InProceedings{Sedaghat_2015_ICCV,
author = {Sedaghat, Nima and Brox, Thomas},
title = {Unsupervised Generation of a Viewpoint Annotated Car Dataset From Videos},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015}
}