DIV-FF: Dynamic Image-Video Feature Fields For Environment Understanding in Egocentric Videos

Lorenzo Mur-Labadia, Josechu Guerrero, Ruben Martinez-Cantin; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), 2025, pp. 3470-3480

Abstract


Environment understanding in egocentric videos is an important step for applications such as robotics, augmented reality, and assistive technologies. These videos are characterized by dynamic interactions and a strong dependence on the wearer's engagement with the environment. Traditional approaches often focus on isolated clips or fail to integrate rich semantic and geometric information, limiting scene comprehension. We introduce Dynamic Image-Video Feature Fields (DIV-FF), a framework that decomposes the egocentric scene into persistent, dynamic, and actor-based components while integrating both image- and video-language features. Our model enables detailed segmentation, captures affordances, understands the surroundings, and maintains a consistent scene representation over time. DIV-FF outperforms state-of-the-art methods, particularly in dynamically evolving scenarios, demonstrating its potential to advance long-term, spatio-temporal scene understanding.
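To make the decomposition concrete, the following is a minimal sketch (not the authors' released code) of how a three-stream feature field of this kind could be organized, assuming a NeRF-style formulation in which each stream predicts a density, a color, and a language-aligned feature vector per 3D sample, and the streams are mixed by their densities. All class and parameter names (StreamMLP, DecomposedFeatureField, feat_dim) are illustrative, and the density-weighted mixing shown here is a simplification of full volume rendering along a ray.

    # Hypothetical sketch of a persistent/dynamic/actor feature-field
    # decomposition; not the DIV-FF implementation.
    import torch
    import torch.nn as nn

    class StreamMLP(nn.Module):
        """One component field (persistent, dynamic, or actor)."""
        def __init__(self, in_dim=3, hidden=128, feat_dim=64):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            self.sigma = nn.Linear(hidden, 1)        # volume density
            self.rgb = nn.Linear(hidden, 3)          # color
            self.feat = nn.Linear(hidden, feat_dim)  # language-aligned feature

        def forward(self, x):
            h = self.trunk(x)
            return torch.relu(self.sigma(h)), torch.sigmoid(self.rgb(h)), self.feat(h)

    class DecomposedFeatureField(nn.Module):
        """Persistent + dynamic + actor streams, mixed by their densities."""
        def __init__(self, feat_dim=64):
            super().__init__()
            # The dynamic and actor streams also receive a time coordinate
            # (x, y, z, t), so they can vary across the video.
            self.persistent = StreamMLP(in_dim=3, feat_dim=feat_dim)
            self.dynamic = StreamMLP(in_dim=4, feat_dim=feat_dim)
            self.actor = StreamMLP(in_dim=4, feat_dim=feat_dim)

        def forward(self, xyz, t):
            xyzt = torch.cat([xyz, t], dim=-1)
            outs = [self.persistent(xyz), self.dynamic(xyzt), self.actor(xyzt)]
            sigmas = torch.cat([o[0] for o in outs], dim=-1)    # (N, 3)
            w = sigmas / (sigmas.sum(-1, keepdim=True) + 1e-8)  # mixing weights
            rgb = sum(w[..., i:i + 1] * outs[i][1] for i in range(3))
            feat = sum(w[..., i:i + 1] * outs[i][2] for i in range(3))
            return sigmas.sum(-1, keepdim=True), rgb, feat

    # Query 1024 samples at one normalized timestamp:
    model = DecomposedFeatureField()
    sigma, rgb, feat = model(torch.rand(1024, 3), torch.full((1024, 1), 0.5))
    # `feat` can then be compared against a text embedding for open-vocabulary
    # segmentation or affordance queries.

In practice, the per-point feature head would be supervised by distilling 2D image and video-language embeddings (e.g., CLIP-style features) into the rendered views; those encoders and losses are omitted from this sketch.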

Related Material


@InProceedings{Mur-Labadia_2025_CVPR,
    author    = {Mur-Labadia, Lorenzo and Guerrero, Josechu and Martinez-Cantin, Ruben},
    title     = {DIV-FF: Dynamic Image-Video Feature Fields For Environment Understanding in Egocentric Videos},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {3470-3480}
}