DisenQ: Disentangling Q-Former for Activity-Biometrics

Shehreen Azad, Yogesh Singh Rawat; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025, pp. 13502-13512

Abstract


In this work, we address activity-biometrics, which involves identifying individuals across a diverse set of activities. Unlike traditional person identification, this setting introduces additional challenges as identity cues become entangled with motion dynamics and appearance variations, making biometric feature learning more complex. While additional visual data such as pose and/or silhouettes can help, they often suffer from extraction inaccuracies. To overcome this, we propose a multimodal language-guided framework that replaces reliance on additional visual data with structured textual supervision. At its core, we introduce **DisenQ** (**Disen**tangling **Q**-Former), a unified querying transformer that disentangles biometrics, motion, and non-biometrics features by leveraging structured language guidance. This ensures identity cues remain independent of appearance and motion variations, preventing misidentification. We evaluate our approach on three activity-based video benchmarks, achieving state-of-the-art performance. Additionally, we demonstrate strong generalization to complex real-world scenarios with competitive performance on a traditional video-based identification benchmark, showing the effectiveness of our framework.
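The abstract describes the mechanism only at a high level. As a rough illustration of what a querying transformer with factor-specific query groups could look like, below is a minimal PyTorch sketch; the class name `DisenQSketch`, all dimensions, the three-way query split, and the cosine-alignment loss standing in for "structured language guidance" are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch of a disentangling querying transformer, loosely following
# the abstract's description. Module names, dimensions, the use of three
# separate learnable query banks, and the alignment loss are all assumptions;
# the paper's actual DisenQ architecture may differ.
import torch
import torch.nn as nn


class DisenQSketch(nn.Module):
    """Three query groups cross-attend to frozen visual features so that
    biometrics, motion, and non-biometrics cues land in separate slots."""

    def __init__(self, dim=256, n_queries=16, n_heads=8, n_layers=2):
        super().__init__()
        # One learnable query bank per factor to be disentangled.
        self.queries = nn.ParameterDict({
            name: nn.Parameter(torch.randn(n_queries, dim) * 0.02)
            for name in ("biometrics", "motion", "non_biometrics")
        })
        layer = nn.TransformerDecoderLayer(
            d_model=dim, nhead=n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        # Projection into a shared space for alignment with text features.
        self.text_proj = nn.Linear(dim, dim)

    def forward(self, visual_feats):
        # visual_feats: (B, N, dim) patch/frame tokens from a visual encoder.
        B = visual_feats.size(0)
        out = {}
        for name, q in self.queries.items():
            q = q.unsqueeze(0).expand(B, -1, -1)       # (B, n_queries, dim)
            out[name] = self.decoder(q, visual_feats)  # cross-attention
        return out

    def align_loss(self, query_out, text_emb):
        # Hedged stand-in for language guidance: pull the pooled queries of
        # each factor toward that factor's text embedding (cosine alignment).
        pooled = self.text_proj(query_out.mean(dim=1))  # (B, dim)
        pooled = nn.functional.normalize(pooled, dim=-1)
        text_emb = nn.functional.normalize(text_emb, dim=-1)
        return (1 - (pooled * text_emb).sum(dim=-1)).mean()


if __name__ == "__main__":
    model = DisenQSketch()
    feats = torch.randn(2, 196, 256)  # e.g., 2 clips, 196 visual tokens each
    outs = model(feats)
    print({k: tuple(v.shape) for k, v in outs.items()})
```

Keeping the query banks named and separate means downstream identification can consume only the biometrics slots, while the motion and non-biometrics slots absorb nuisance variation, which matches the abstract's goal of keeping identity cues independent of appearance and motion.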

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Azad_2025_ICCV,
    author    = {Azad, Shehreen and Rawat, Yogesh Singh},
    title     = {DisenQ: Disentangling Q-Former for Activity-Biometrics},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {13502-13512}
}