Video Action Recognition with Attentive Semantic Units

Yifei Chen, Dapeng Chen, Ruijin Liu, Hao Li, Wei Peng; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 10170-10180

Abstract


Visual-Language Models (VLMs) have significantly advanced video action recognition. Supervised by the semantics of action labels, recent works adapt the visual branch of VLMs to learn video representations. Despite the effectiveness demonstrated by these works, we believe that the potential of VLMs has yet to be fully harnessed. In light of this, we exploit the semantic units (SUs) hidden behind the action labels and leverage their correlations with fine-grained items in frames for more accurate action recognition. SUs are entities extracted from the language descriptions of the entire action set, including body parts, objects, scenes, and motions. To further enhance the alignment between visual contents and the SUs, we introduce a multi-region attention module (MRA) to the visual branch of the VLM. The MRA enables the perception of region-aware visual features beyond the original global feature. Our method adaptively attends to and selects SUs relevant to the visual features of each frame. With a cross-modal decoder, the selected SUs serve to decode spatiotemporal video representations. In summary, using SUs as an intermediary boosts both discriminative ability and transferability. Specifically, in fully-supervised learning, our method achieves 87.8% top-1 accuracy on Kinetics-400. In K=2 few-shot experiments, our method surpasses the previous state-of-the-art by +7.1% and +15.0% on HMDB-51 and UCF-101, respectively.
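The attentive selection described above — frame features attending over a bank of semantic-unit embeddings — can be sketched as a standard scaled dot-product cross-attention. This is a minimal NumPy illustration, not the authors' implementation; the function name, shapes, and `temperature` parameter are assumptions for exposition only.

```python
import numpy as np

def attend_semantic_units(frame_feats, su_embeds, temperature=1.0):
    """Illustrative cross-modal attention (not the paper's exact module).

    Each frame feature attends over semantic-unit (SU) embeddings and
    returns an SU-conditioned summary per frame.
    Shapes: frame_feats (T, d), su_embeds (N, d).
    Returns: summaries (T, d) and attention weights (T, N)."""
    d = frame_feats.shape[-1]
    # Scaled dot-product scores between frames and SUs.
    scores = frame_feats @ su_embeds.T / (np.sqrt(d) * temperature)
    # Numerically stable softmax over the SU axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ su_embeds, weights
```

In the paper's pipeline, such SU-weighted summaries would feed the cross-modal decoder that produces the spatiotemporal video representation.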

Related Material


[bibtex]
@InProceedings{Chen_2023_ICCV,
  author    = {Chen, Yifei and Chen, Dapeng and Liu, Ruijin and Li, Hao and Peng, Wei},
  title     = {Video Action Recognition with Attentive Semantic Units},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2023},
  pages     = {10170-10180}
}