Diversity Regularized Spatiotemporal Attention for Video-Based Person Re-Identification
Shuang Li, Slawomir Bak, Peter Carr, Xiaogang Wang; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 369-378
Abstract
Video-based person re-identification matches video clips of people across non-overlapping cameras. Most existing methods tackle this problem by encoding each video frame in its entirety and computing an aggregate representation across all frames. In practice, people are often partially occluded, which can corrupt the extracted features. Instead, we propose a new spatiotemporal attention model that automatically discovers a diverse set of distinctive body parts. This allows useful information to be extracted from all frames without succumbing to occlusions and misalignments. The network learns multiple spatial attention models and employs a diversity regularization term to ensure multiple models do not discover the same body part. Features extracted from local image regions are organized by spatial attention model and are combined using temporal attention. As a result, the network learns latent representations of the face, torso and other body parts using the best available image patches from the entire video sequence. Extensive evaluations on three datasets show that our framework outperforms the state-of-the-art approaches by large margins on multiple metrics.
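The abstract describes the architecture only at a high level; the following is a minimal PyTorch sketch of that idea for illustration, not the authors' implementation. It assumes per-frame convolutional feature maps as input, K spatial attention heads scored by a 1x1 convolution, a shared linear temporal scorer, and a Hellinger-style diversity penalty ||sqrt(A) sqrt(A)^T - I||_F^2 between attention maps; all class names and these design choices are assumptions, and the paper's exact regularizer and scoring functions may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialAttention(nn.Module):
    """One spatial attention head: a softmax map over the H*W feature locations."""

    def __init__(self, in_channels):
        super().__init__()
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)  # 1x1-conv scorer (assumed)

    def forward(self, feats):                      # feats: (N, C, H, W)
        attn = F.softmax(self.score(feats).flatten(2), dim=-1)      # (N, 1, H*W)
        pooled = torch.bmm(attn, feats.flatten(2).transpose(1, 2))  # (N, 1, C)
        return pooled.squeeze(1), attn.squeeze(1)  # part feature (N, C), map (N, H*W)


def diversity_penalty(attn_maps):
    """Hellinger-style overlap penalty ||sqrt(A) sqrt(A)^T - I||_F^2 (assumed form).

    attn_maps: (N, K, H*W), each row a distribution over locations. Rows sum
    to 1, so the Gram diagonal is exactly 1 and only overlapping pairs of
    attention maps contribute to the penalty.
    """
    root = attn_maps.clamp_min(0).sqrt()
    gram = torch.bmm(root, root.transpose(1, 2))                    # (N, K, K)
    eye = torch.eye(gram.size(1), device=gram.device).expand_as(gram)
    return ((gram - eye) ** 2).sum(dim=(1, 2)).mean()


class DiverseSpatioTemporalAttention(nn.Module):
    """K spatial heads per frame, temporal attention per head, parts concatenated."""

    def __init__(self, in_channels, num_heads):
        super().__init__()
        self.heads = nn.ModuleList(SpatialAttention(in_channels) for _ in range(num_heads))
        self.temporal_score = nn.Linear(in_channels, 1)  # shared temporal scorer (assumed)

    def forward(self, clip):                       # clip: (B, T, C, H, W) frame features
        B, T, C, H, W = clip.shape
        feats = clip.reshape(B * T, C, H, W)
        parts, maps = [], []
        for head in self.heads:
            f, a = head(feats)                     # (B*T, C), (B*T, H*W)
            f = f.view(B, T, C)
            w = F.softmax(self.temporal_score(f), dim=1)            # (B, T, 1) over time
            parts.append((w * f).sum(dim=1))       # temporally pooled part feature (B, C)
            maps.append(a.view(B * T, -1))
        div_loss = diversity_penalty(torch.stack(maps, dim=1))      # maps: (B*T, K, H*W)
        return torch.cat(parts, dim=1), div_loss   # embedding (B, K*C) + regularizer


# Toy usage: in practice `clip` would be backbone CNN features of a tracklet.
clip = torch.randn(2, 8, 128, 16, 8)   # 2 clips, 8 frames, 128x16x8 feature maps
model = DiverseSpatioTemporalAttention(in_channels=128, num_heads=4)
embedding, div_loss = model(clip)
print(embedding.shape, div_loss.item())  # torch.Size([2, 512]) and a scalar penalty
```

The diversity term would be added to the re-identification loss with a weighting coefficient during training, pushing the heads toward distinct body parts rather than all converging on the most discriminative region.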
Related Material
[pdf]
[supp]
[arXiv]
[bibtex]
@InProceedings{Li_2018_CVPR,
author = {Li, Shuang and Bak, Slawomir and Carr, Peter and Wang, Xiaogang},
title = {Diversity Regularized Spatiotemporal Attention for Video-Based Person Re-Identification},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}