Permutation-Invariant Feature Restructuring for Correlation-Aware Image Set-Based Recognition

Xiaofeng Liu, Zhenhua Guo, Site Li, Lingsheng Kong, Ping Jia, Jane You, B.V.K. Vijaya Kumar; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 4986-4996

Abstract


We consider the problem of comparing the similarity of image sets that contain a variable number of unordered, heterogeneous images of varying quality. We use feature restructuring to exploit the correlations among images both within and across sets. Specifically, a residual self-attention module restructures each feature using the other features within its set, emphasizing discriminative images and suppressing redundancy. A sparse/collaborative learning-based, dependency-guided representation scheme then reconstructs the probe features conditioned on the gallery features in order to adaptively align the two sets. This makes our framework compatible with both verification and open-set identification. We show that the parametric self-attention network and the non-parametric dictionary learning can be trained end-to-end with a unified alternating optimization scheme, and that the full framework is permutation-invariant. In our experiments, the method achieves top performance on competitive image set/video-based face recognition and person re-identification benchmarks.
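
The abstract describes two building blocks: a residual self-attention module that restructures each feature using the other features in its set, and a sparse/collaborative coding step that reconstructs probe features from gallery features. The PyTorch sketch below illustrates both ideas in generic form, assuming plain d-dimensional features and an l2-regularized (collaborative) coding with a closed-form solution; the names ResidualSetAttention and collaborative_reconstruction are hypothetical and this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class ResidualSetAttention(nn.Module):
    """Restructure each feature using the other features in its set
    (generic residual self-attention sketch, not the paper's code)."""
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, x):
        # x: (batch, set_size, dim); the operation is permutation-equivariant
        q, k, v = self.query(x), self.key(x), self.value(x)
        attn = torch.softmax(q @ k.transpose(1, 2) / x.shape[-1] ** 0.5, dim=-1)
        return x + attn @ v  # residual connection preserves the original feature

def collaborative_reconstruction(probe, gallery, lam=0.1):
    """Reconstruct a probe feature as an l2-regularized linear combination of
    gallery features (generic collaborative-representation sketch)."""
    # probe: (dim,), gallery: (n_gallery, dim)
    G = gallery.t()                                # (dim, n_gallery)
    A = G.t() @ G + lam * torch.eye(G.shape[1])    # ridge-regularized Gram matrix
    coeffs = torch.linalg.solve(A, G.t() @ probe)  # closed-form coding coefficients
    return G @ coeffs                              # probe aligned to the gallery set

# Mean-pooling the restructured features yields a permutation-invariant set descriptor.
feats = torch.randn(2, 5, 128)                     # 2 sets of 5 images, 128-d features
set_descriptor = ResidualSetAttention(128)(feats).mean(dim=1)   # (2, 128)
```

Because self-attention treats the set members symmetrically and the pooling is order-agnostic, permuting the images within a set leaves the set descriptor unchanged, which is the invariance property the abstract emphasizes.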

Related Material


[pdf]
[bibtex]
@InProceedings{Liu_2019_ICCV,
author = {Liu, Xiaofeng and Guo, Zhenhua and Li, Site and Kong, Lingsheng and Jia, Ping and You, Jane and Kumar, B.V.K. Vijaya},
title = {Permutation-Invariant Feature Restructuring for Correlation-Aware Image Set-Based Recognition},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}