In this supplementary file, we provide the distance matrices for the four datasets used in the paper, so that rank accuracy can be evaluated for each query-gallery ID pair. Due to file-size limitations, we provide one matrix for a frame-based dataset and one for a video-based dataset. When a gallery example shares the query's person ID, we exclude it from the pairwise-distance calculation if it comes from the same camera as the query. As existing works mostly compare Rank-1 scores only, our paper reports the Rank-1 score; readers can use the attached matrices to assess the other rank accuracies.

The code requires a Python environment and the NumPy package (argparse, which it also uses, is part of the Python standard library), so it should run on most servers/PCs. If you want to create the environment from scratch, we recommend using Anaconda on a Linux server with the commands below.

conda create -n seed python=3.8
conda activate seed
pip install -r env.txt

To reproduce the Rank-1 scores reported in the paper, run:

python calc_rank.py

For other rank values, please use the following command:

python calc_rank.py -r ${your_rank_value}

or

python calc_rank.py --rank ${your_rank_value}

For example, 'python calc_rank.py --rank 10' reports the rank-10 score for all four datasets.
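For readers who want to compute rank-k accuracy from the attached matrices directly, the sketch below shows the general recipe: mask out same-ID, same-camera gallery entries as described above, sort each query's distances, and check whether the correct ID appears among the k nearest gallery entries. This is a minimal illustration with NumPy; the function name, signature, and toy data are our own and do not reflect the actual interface of calc_rank.py.

```python
import numpy as np

def rank_k_accuracy(dist, query_ids, query_cams, gallery_ids, gallery_cams, k=1):
    """Fraction of queries whose correct ID appears among the k nearest
    gallery entries by ascending distance. Gallery entries with the same
    ID *and* the same camera as the query are excluded, as described above."""
    dist = dist.astype(float).copy()
    # Mask invalid pairs (same person, same camera) with infinity so they
    # can never appear in the top-k ranking.
    invalid = (gallery_ids[None, :] == query_ids[:, None]) & \
              (gallery_cams[None, :] == query_cams[:, None])
    dist[invalid] = np.inf
    # Gallery indices sorted by distance, one row per query.
    order = np.argsort(dist, axis=1)
    # IDs of the k nearest valid gallery entries for each query.
    top_k_ids = gallery_ids[order[:, :k]]
    hits = (top_k_ids == query_ids[:, None]).any(axis=1)
    return hits.mean()

# Toy example: 2 queries, 3 gallery entries.
dist = np.array([[0.1, 0.3, 0.9],
                 [0.5, 0.4, 0.2]])
query_ids = np.array([1, 2])
query_cams = np.array([0, 0])
gallery_ids = np.array([1, 1, 2])
gallery_cams = np.array([0, 1, 1])  # gallery 0 is same ID/camera as query 0
print(rank_k_accuracy(dist, query_ids, query_cams, gallery_ids, gallery_cams, k=1))  # 1.0
```

Note that gallery entry 0 is excluded for query 0 (same ID, same camera), so the match is found via the cross-camera entry instead, which is exactly the evaluation protocol described above.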
