Viewpoint-Aware Channel-Wise Attentive Network for Vehicle Re-Identification

Tsai-Shien Chen, Man-Yu Lee, Chih-Ting Liu, Shao-Yi Chien; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, pp. 574-575

Abstract


Vehicle re-identification (re-ID) matches images of the same vehicle across different cameras. It is fundamentally challenging because the dramatic appearance changes caused by different viewpoints can make the framework fail to match two images of the same identity. Most existing works address the problem by extracting viewpoint-aware features with a spatial attention mechanism, which, however, usually suffers from noisy generated attention maps or otherwise requires expensive keypoint labels to improve their quality. In this work, we propose the Viewpoint-aware Channel-wise Attention Mechanism (VCAM), which approaches the attention mechanism from a different aspect. Our VCAM enables the feature learning framework to reweigh the importance of each feature map in a channel-wise manner according to the "viewpoint" of the input vehicle. Extensive experiments validate the effectiveness of the proposed method and show that we perform favorably against state-of-the-art methods on the public VeRi-776 dataset and obtain promising results on the 2020 AI City Challenge. We also conduct further experiments to demonstrate the interpretability of how our VCAM practically assists the learning framework.
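
To make the idea of viewpoint-conditioned channel-wise reweighting concrete, the sketch below shows one way such a module could look in PyTorch: a small MLP maps a viewpoint descriptor to per-channel weights that scale the backbone feature maps. This is only a minimal illustration under assumed interfaces; the module name, the viewpoint descriptor dimensionality, and the MLP layout (ViewpointChannelAttention, viewpoint_dim, the bottleneck ratio) are hypothetical and not taken from the paper or its released code.

# Minimal, illustrative PyTorch sketch of viewpoint-conditioned channel-wise
# attention in the spirit of VCAM. All names are hypothetical assumptions;
# this is NOT the authors' implementation.
import torch
import torch.nn as nn

class ViewpointChannelAttention(nn.Module):
    """Predicts a per-channel weight vector from a viewpoint embedding and
    uses it to reweigh the backbone feature maps."""
    def __init__(self, num_channels: int, viewpoint_dim: int = 8):
        super().__init__()
        # Small MLP mapping the viewpoint descriptor to channel weights in (0, 1).
        self.mlp = nn.Sequential(
            nn.Linear(viewpoint_dim, num_channels // 4),
            nn.ReLU(inplace=True),
            nn.Linear(num_channels // 4, num_channels),
            nn.Sigmoid(),
        )

    def forward(self, feat: torch.Tensor, viewpoint: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) backbone feature maps; viewpoint: (B, viewpoint_dim)
        weights = self.mlp(viewpoint)                        # (B, C) channel importance
        return feat * weights.unsqueeze(-1).unsqueeze(-1)    # broadcast over H and W

if __name__ == "__main__":
    attn = ViewpointChannelAttention(num_channels=256, viewpoint_dim=8)
    feat = torch.randn(4, 256, 16, 16)    # dummy backbone features
    viewpoint = torch.randn(4, 8)         # dummy viewpoint descriptor
    out = attn(feat, viewpoint)
    print(out.shape)                      # torch.Size([4, 256, 16, 16])

In this reading, the attention acts on which channels matter for a given viewpoint rather than where to look spatially, which is what distinguishes a channel-wise scheme from the spatial attention maps used by prior work.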

Related Material


[pdf]
[bibtex]
@InProceedings{Chen_2020_CVPR_Workshops,
author = {Chen, Tsai-Shien and Lee, Man-Yu and Liu, Chih-Ting and Chien, Shao-Yi},
title = {Viewpoint-Aware Channel-Wise Attentive Network for Vehicle Re-Identification},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2020}
}