Helping Hands: An Object-Aware Ego-Centric Video Recognition Model

Chuhan Zhang, Ankush Gupta, Andrew Zisserman; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 13901-13912

Abstract


We introduce an object-aware decoder for improving the performance of spatio-temporal representations on ego-centric videos. The key idea is to enhance object-awareness during training by tasking the model to predict hand positions, object positions, and the semantic labels of the objects using paired captions when available. At inference time the model only requires RGB frames as input, and is able to track and ground objects (although it has not been trained explicitly for this). We demonstrate the quality of the object-aware representations learnt by our model by: (i) evaluating them for strong transfer, i.e., through zero-shot testing, on a number of downstream video-text retrieval and classification benchmarks; and (ii) evaluating their temporal and spatial (grounding) performance by fine-tuning for this task. In all cases the performance improves over the state of the art -- even over networks trained with far larger batch sizes. Overall, we show that the model can act as a drop-in replacement for an ego-centric video model and improve performance.
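
To make the training setup described in the abstract concrete, below is a minimal sketch of how an object-aware decoder with auxiliary hand/object heads could be wired up. It is not the authors' implementation: the module name ObjectAwareDecoder, the number of queries, the feature dimensions, and the specific heads are illustrative assumptions. The idea shown is that a transformer decoder attends to spatio-temporal video tokens with a small set of learnable queries; auxiliary heads predict boxes and semantic-label embeddings (to be supervised from detected hands/objects and caption nouns during training), while only the clip-level embedding is needed at inference from RGB frames alone.

# A minimal sketch, assuming a PyTorch-style setup; NOT the authors' code.
# All names, dimensions, and head designs are assumptions for illustration.
import torch
import torch.nn as nn


class ObjectAwareDecoder(nn.Module):
    """Transformer decoder with auxiliary hand/object prediction heads."""

    def __init__(self, dim=256, num_queries=8, num_layers=2, vocab_dim=512):
        super().__init__()
        # Learnable queries that attend to spatio-temporal video features.
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        # Auxiliary heads used for training-time supervision only:
        self.box_head = nn.Linear(dim, 4)            # hand/object box (cx, cy, w, h)
        self.label_head = nn.Linear(dim, vocab_dim)  # embedding matched to caption nouns
        # Pooled clip-level representation used for retrieval / classification.
        self.video_head = nn.Linear(dim, vocab_dim)

    def forward(self, video_tokens):
        # video_tokens: (batch, num_tokens, dim) spatio-temporal features
        # from any video backbone operating on RGB frames.
        q = self.queries.unsqueeze(0).expand(video_tokens.size(0), -1, -1)
        decoded = self.decoder(q, video_tokens)
        boxes = self.box_head(decoded).sigmoid()           # normalised box predictions
        labels = self.label_head(decoded)                  # per-query semantic embeddings
        video_emb = self.video_head(decoded.mean(dim=1))   # clip-level embedding
        return video_emb, boxes, labels


if __name__ == "__main__":
    # Toy forward pass on random features standing in for a backbone's output.
    model = ObjectAwareDecoder()
    tokens = torch.randn(2, 196, 256)   # e.g. 2 clips, 14x14 patch tokens each
    video_emb, boxes, labels = model(tokens)
    print(video_emb.shape, boxes.shape, labels.shape)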

Related Material


[bibtex]
@InProceedings{Zhang_2023_ICCV,
    author    = {Zhang, Chuhan and Gupta, Ankush and Zisserman, Andrew},
    title     = {Helping Hands: An Object-Aware Ego-Centric Video Recognition Model},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {13901-13912}
}