VehicleNet: Learning Robust Feature Representation for Vehicle Re-identification

Zhedong Zheng, Tao Ruan, Yunchao Wei, Yi Yang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019, pp. 1-4


Vehicle re-identification (re-id) remains challenging due to significant intra-class variations across different cameras. In this paper, we present our solution to the AICity Vehicle Re-id Challenge 2019. The limited training data motivates us to leverage free data from the web and deploy a two-stage learning strategy. The success of large-scale datasets, e.g., ImageNet, inspires us to build a large-scale vehicle dataset called VehicleNet upon public web data. Specifically, we combine the provided training set with other public vehicle datasets, i.e., VeRi-776, CompCar and VehicleID, to form VehicleNet. In the first stage, the training set is scaled up about 16 times, from 26,803 to 434,453 images. Despite the bias between different datasets, e.g., in illumination and scene, VehicleNet generally provides common knowledge of vehicles, helping the deeply-learned model learn representations invariant to different viewpoints. In the second stage, we further fine-tune the trained model only on the original training set. This second stage narrows the gap between VehicleNet and the original training set. Despite its simplicity, this approach achieves 75.60% mAP on the private testing set without extra information, e.g., temporal or spatial annotations of the test data.
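The two-stage strategy described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the toy `train` function, the scalar "model", and the tiny stand-in datasets are all assumptions, chosen only to show the order of the two stages (train on the merged pool first, then fine-tune on the original set alone, typically with a smaller learning rate).

```python
def train(model, dataset, lr, epochs):
    """Toy 'training': nudge a scalar model toward the dataset mean.
    A stand-in for optimizing a CNN with a re-id loss."""
    for _ in range(epochs):
        mean = sum(dataset) / len(dataset)
        model += lr * (mean - model)
    return model

# Illustrative 1-D stand-ins for the real image datasets.
aicity = [1.0, 2.0, 3.0]            # original training set (26,803 images in the paper)
web_data = [10.0, 12.0, 8.0, 11.0]  # VeRi-776 + CompCar + VehicleID
vehiclenet = aicity + web_data      # merged pool (434,453 images in the paper)

model = 0.0
# Stage 1: learn common vehicle knowledge from the full VehicleNet pool.
model = train(model, vehiclenet, lr=0.5, epochs=10)
# Stage 2: fine-tune only on the original set (lower lr) to close the domain gap.
model = train(model, aicity, lr=0.1, epochs=5)
```

After stage 2 the model sits between the VehicleNet statistics and the original-set statistics, mirroring how fine-tuning pulls the pretrained representation back toward the target domain.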

Related Material

@InProceedings{Zheng_2019_CVPR_Workshops,
  author    = {Zheng, Zhedong and Ruan, Tao and Wei, Yunchao and Yang, Yi},
  title     = {VehicleNet: Learning Robust Feature Representation for Vehicle Re-identification},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2019}
}