Video-Based Person Re-Identification by Deep Feature Guided Pooling

Youjiao Li, Li Zhuo, Jiafeng Li, Jing Zhang, Xi Liang, Qi Tian; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2017, pp. 39-46


Person re-identification (re-id) aims to match a specific person across non-overlapping views of different cameras and is currently an active topic in computer vision. Compared with image-based person re-id, video-based techniques can achieve better performance by fully exploiting space-time information. This paper presents a novel video-based person re-id method named Deep Feature Guided Pooling (DFGP), which takes full advantage of the space-time information. The contributions of the method are as follows: (1) A PCA-based convolutional network (PCN), a lightweight deep learning network, is trained to generate deep features of video frames. The deep features are aggregated by average pooling into person-level deep feature vectors, which are then used to guide the generation of human appearance features, making the appearance features robust to the severe noise in videos. (2) Hand-crafted local features of videos are aggregated by max pooling to emphasize the motion variations of different persons, making the human descriptors more discriminative. (3) The final human descriptors combine deep features and hand-crafted local features to exploit their complementary strengths, improving identification performance. Experimental results show that our approach outperforms six other state-of-the-art video-based methods on the challenging PRID 2011 and iLIDS-VID video-based person re-id datasets.
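The aggregation scheme described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: per-frame deep features (e.g. from a network like PCN) are average-pooled, per-frame hand-crafted local features are max-pooled, and the two pooled vectors are concatenated into one video-level descriptor. The feature dimensions, the function name, and the random toy inputs are assumptions for illustration only.

```python
import numpy as np

def dfgp_descriptor(deep_feats, local_feats):
    """Sketch of DFGP-style aggregation (hypothetical helper, not the paper's code).

    deep_feats:  (T, D1) per-frame deep features from a network such as PCN.
    local_feats: (T, D2) per-frame hand-crafted local features.
    Returns a single (D1 + D2,) video-level descriptor.
    """
    # Average pooling over frames: smooths out per-frame noise in deep features.
    avg_pooled = deep_feats.mean(axis=0)
    # Max pooling over frames: keeps the strongest local/motion responses.
    max_pooled = local_feats.max(axis=0)
    # Final descriptor combines both feature types.
    return np.concatenate([avg_pooled, max_pooled])

# Toy usage with random frame features (T=10 frames, D1=128, D2=64).
rng = np.random.default_rng(0)
desc = dfgp_descriptor(rng.standard_normal((10, 128)),
                       rng.standard_normal((10, 64)))
print(desc.shape)  # (192,)
```

In this sketch, average pooling plays the noise-suppression role the abstract attributes to the deep branch, while max pooling preserves the discriminative peaks of the hand-crafted branch.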

Related Material

@InProceedings{Li_2017_CVPR_Workshops,
  author    = {Li, Youjiao and Zhuo, Li and Li, Jiafeng and Zhang, Jing and Liang, Xi and Tian, Qi},
  title     = {Video-Based Person Re-Identification by Deep Feature Guided Pooling},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {July},
  year      = {2017}
}