Temporal Extension Topology Learning for Video-based Person Re-Identification

Jiaqi Ning, Fei Li, Rujie Liu, Shun Takeuchi, Genta Suzuki; Proceedings of the Asian Conference on Computer Vision (ACCV) Workshops, 2022, pp. 207-219

Abstract


Video-based person re-identification aims to match the same identity across video clips captured by multiple non-overlapping cameras. By effectively exploiting both the temporal and spatial cues of a video clip, a more comprehensive representation of the identity in the clip can be obtained. In this paper, we propose a novel graph-based framework, referred to as Temporal Extension Adaptive Graph Convolution (TE-AGC), which can effectively mine features in both the spatial and temporal dimensions within a single graph convolution operation. Specifically, TE-AGC adopts a CNN backbone and a key-point detector to extract global and local features as graph nodes. Moreover, a carefully designed adaptive graph convolution module encourages meaningful information transfer by dynamically learning the reliability of local features across multiple frames. Comprehensive experiments on two video person re-identification benchmark datasets demonstrate the effectiveness and state-of-the-art performance of the proposed method.
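As a rough illustration of the idea described in the abstract (not the authors' implementation), the sketch below shows a minimal adaptive graph convolution over part-level node features pooled from multiple frames, where the adjacency is predicted dynamically from node affinities. All module and parameter names are assumptions for illustration only.

```python
# Minimal sketch, assuming nodes are per-frame part features (plus optional
# global features) stacked into a single spatio-temporal node set.
# This is NOT the TE-AGC implementation from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveGraphConv(nn.Module):
    """One graph-convolution step whose adjacency is learned from node affinities."""

    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)   # projects nodes for affinity estimation
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)   # message transformation
        self.norm = nn.LayerNorm(dim)

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        # nodes: (batch, num_nodes, dim), where num_nodes = frames * parts
        affinity = torch.matmul(self.query(nodes), self.key(nodes).transpose(1, 2))
        adj = F.softmax(affinity / nodes.size(-1) ** 0.5, dim=-1)  # dynamic adjacency
        messages = torch.matmul(adj, self.value(nodes))            # aggregate neighbor info
        return self.norm(nodes + messages)                         # residual update


if __name__ == "__main__":
    batch, frames, parts, dim = 2, 4, 6, 256
    # Part-level features (e.g., from a key-point detector), one set per frame,
    # flattened so that spatial and temporal neighbors share one graph.
    part_feats = torch.randn(batch, frames * parts, dim)
    out = AdaptiveGraphConv(dim)(part_feats)
    print(out.shape)  # torch.Size([2, 24, 256])
```

Because the node set mixes parts from different frames, a single convolution step can pass messages both spatially (between parts of one frame) and temporally (between the same part in different frames), which is the intuition behind the temporal extension described above.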

Related Material


[pdf]
[bibtex]
@InProceedings{Ning_2022_ACCV,
    author    = {Ning, Jiaqi and Li, Fei and Liu, Rujie and Takeuchi, Shun and Suzuki, Genta},
    title     = {Temporal Extension Topology Learning for Video-based Person Re-Identification},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV) Workshops},
    month     = {December},
    year      = {2022},
    pages     = {207-219}
}