Exploring Missing Modality in Multimodal Egocentric Datasets
Abstract
Multimodal video understanding is crucial for analyzing egocentric videos, where integrating multiple sensory signals significantly enhances action recognition and moment localization. However, practical applications often grapple with incomplete modalities due to factors such as privacy concerns, efficiency demands, or hardware malfunctions. Addressing this, our study delves into the impact of missing modalities on egocentric action recognition, particularly within transformer-based models. We introduce a novel concept, the Missing Modality Token (MMT), to maintain performance even when modalities are absent, a strategy that proves effective on the Ego4D, Epic-Kitchens, and Epic-Sounds datasets. Our method mitigates the performance loss, reducing the drop from ~30% to only ~10% when half of the test set is modality-incomplete. Through extensive experimentation, we demonstrate the adaptability of MMT to different training scenarios and its superiority in handling missing modalities compared to current methods. Our research contributes a comprehensive analysis and an innovative approach, opening avenues for more resilient multimodal systems in real-world settings.
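The abstract only sketches the MMT mechanism, so the following is a minimal, hypothetical PyTorch illustration of the general idea: a learnable per-modality token stands in for the features of an absent modality before transformer-based fusion. The module name (MMTFusion), the two-modality video/audio setup, dimensions, and mean pooling are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class MMTFusion(nn.Module):
    """Toy fusion model: learnable tokens stand in for missing modalities (assumed design)."""

    def __init__(self, dim=768, num_heads=8, num_layers=2, num_classes=100):
        super().__init__()
        # One learnable "missing modality token" per modality.
        self.mmt_video = nn.Parameter(torch.empty(1, 1, dim))
        self.mmt_audio = nn.Parameter(torch.empty(1, 1, dim))
        nn.init.trunc_normal_(self.mmt_video, std=0.02)
        nn.init.trunc_normal_(self.mmt_audio, std=0.02)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, video_feats, audio_feats, video_present, audio_present):
        # video_feats: (B, Tv, dim), audio_feats: (B, Ta, dim)
        # video_present / audio_present: (B,) boolean flags per sample.
        B, Tv, _ = video_feats.shape
        Ta = audio_feats.shape[1]
        v_keep = video_present.view(B, 1, 1)
        a_keep = audio_present.view(B, 1, 1)
        # Replace features of absent modalities with the learnable token.
        video_feats = torch.where(v_keep, video_feats, self.mmt_video.expand(B, Tv, -1))
        audio_feats = torch.where(a_keep, audio_feats, self.mmt_audio.expand(B, Ta, -1))
        tokens = torch.cat([video_feats, audio_feats], dim=1)  # (B, Tv+Ta, dim)
        fused = self.fusion(tokens)
        return self.head(fused.mean(dim=1))  # (B, num_classes) logits


# Toy usage: half the batch is missing audio at test time.
model = MMTFusion()
video = torch.randn(4, 16, 768)
audio = torch.randn(4, 8, 768)
audio_present = torch.tensor([True, True, False, False])
logits = model(video, audio, torch.ones(4, dtype=torch.bool), audio_present)
```

In such a setup, the stand-in token would presumably be learned by randomly dropping modalities during training, consistent with the abstract's emphasis on adaptability to different (modality-incomplete) training scenarios.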
Related Material

[pdf] [supp] [arXiv] [bibtex]

@InProceedings{Ramazanova_2025_CVPR,
    author    = {Ramazanova, Merey and Pardo, Alejandro and Alwassel, Humam and Ghanem, Bernard},
    title     = {Exploring Missing Modality in Multimodal Egocentric Datasets},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops},
    month     = {June},
    year      = {2025},
    pages     = {75-85}
}