Visible-Infrared Person Re-Identification via Semantic Alignment and Affinity Inference

Xingye Fang, Yang Yang, Ying Fu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 11270-11279

Abstract


Visible-infrared person re-identification (VI-ReID) focuses on matching pedestrian images of the same identity captured by cameras of different modalities. Part-based methods achieve great success by extracting fine-grained features from feature maps. However, most existing part-based methods employ horizontal division to obtain part features, which suffer from misalignment caused by irregular pedestrian movements. Moreover, most current methods measure similarity using the Euclidean or cosine distance of the output features, without considering the relationships among pedestrians. Misaligned part features and naive inference methods both limit the performance of existing works. We propose a Semantic Alignment and Affinity Inference framework (SAAI), which aims to align latent semantic part features with learnable prototypes and to improve inference with affinity information. Specifically, we first propose semantic-aligned feature learning, which employs the similarity between pixel-wise features and learnable prototypes to aggregate the latent semantic part features. Then, we devise an affinity inference module to optimize the inference with pedestrian relationships. Comprehensive experiments conducted on the SYSU-MM01 and RegDB datasets demonstrate the favorable performance of our SAAI framework. Our code will be released at https://github.com/xiaoye-hhh/SAAI.
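The semantic-aligned feature learning described above can be illustrated with a minimal sketch: pixel-wise features are compared against learnable part prototypes, and each prototype gathers a part feature via a softmax-weighted sum over spatial locations. This is an assumption-laden illustration of the general idea, not the paper's exact formulation; the function name, prototype count, and cosine-similarity choice are all hypothetical.

```python
import numpy as np

def aggregate_part_features(feat_map, prototypes):
    """Sketch of prototype-guided part aggregation (illustrative only).

    feat_map:   (C, H, W) pixel-wise feature map
    prototypes: (K, C) learnable part prototypes
    Returns:    (K, C) latent semantic part features
    """
    C, H, W = feat_map.shape
    pixels = feat_map.reshape(C, H * W).T                  # (HW, C)

    # Cosine similarity between every pixel feature and every prototype.
    p = pixels / (np.linalg.norm(pixels, axis=1, keepdims=True) + 1e-8)
    q = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-8)
    sim = p @ q.T                                          # (HW, K)

    # Softmax over spatial locations: a soft attention map per prototype.
    w = np.exp(sim - sim.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)                   # (HW, K)

    # Each prototype aggregates a weighted sum of pixel features,
    # yielding one semantically aligned part feature per prototype.
    return w.T @ pixels                                    # (K, C)

# Toy example: 64-d features on a 12x6 map, 4 part prototypes.
parts = aggregate_part_features(np.random.randn(64, 12, 6),
                                np.random.randn(4, 64))
print(parts.shape)  # (4, 64)
```

Because the attention weights are tied to prototypes rather than to fixed horizontal stripes, the aggregation can follow a body part wherever it appears in the image, which is the motivation for replacing horizontal division.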

Related Material


@InProceedings{Fang_2023_ICCV,
    author    = {Fang, Xingye and Yang, Yang and Fu, Ying},
    title     = {Visible-Infrared Person Re-Identification via Semantic Alignment and Affinity Inference},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {11270-11279}
}