EgoGen: An Egocentric Synthetic Data Generator
Abstract
Understanding the world in first-person view is fundamental in Augmented Reality (AR). This immersive perspective brings dramatic visual changes and unique challenges compared to third-person views. Synthetic data has empowered third-person-view vision models, but its application to embodied egocentric perception tasks remains largely unexplored. A critical challenge lies in simulating natural human movements and behaviors that effectively steer the embodied cameras to capture a faithful egocentric representation of the 3D world. To address this challenge, we introduce EgoGen, a new synthetic data generator that can produce accurate and rich ground-truth training data for egocentric perception tasks. At the heart of EgoGen is a novel human motion synthesis model that directly leverages egocentric visual inputs of a virtual human to sense the 3D environment. Combined with collision-avoiding motion primitives and a two-stage reinforcement learning approach, our motion synthesis model offers a closed-loop solution where the embodied perception and movement of the virtual human are seamlessly coupled. Compared to previous works, our model eliminates the need for a pre-defined global path and is directly applicable to dynamic environments. Combined with our easy-to-use and scalable data generation pipeline, we demonstrate EgoGen's efficacy in three tasks: mapping and localization for head-mounted cameras, egocentric camera tracking, and human mesh recovery from egocentric views. EgoGen will be fully open-sourced, offering a practical solution for creating realistic egocentric training data and aiming to serve as a useful tool for egocentric computer vision research.
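As a rough illustration of the closed-loop idea described above, in which what the virtual human sees determines how it moves and its movement in turn changes what it sees, the Python sketch below steps a simplified 2D agent that picks a turning action from an egocentric depth scan. Everything here (the depth-scan sensor, the candidate turns standing in for motion primitives, the scoring weights) is a hypothetical stand-in for illustration only, not EgoGen's actual models, reinforcement learning policy, or API.

import numpy as np

# Hypothetical egocentric sensor: a coarse 1D depth scan over a 120-degree field of view.
def egocentric_depth(position, heading, obstacles, bins=32, max_depth=5.0):
    angles = heading + np.linspace(-np.pi / 3, np.pi / 3, bins)
    depth = np.full(bins, max_depth)
    for center, radius in obstacles:
        rel = center - position
        dist = np.linalg.norm(rel)
        # Angular half-width of the obstacle as seen from the agent.
        half_width = np.arctan2(radius, dist)
        diff = np.abs(np.angle(np.exp(1j * (angles - np.arctan2(rel[1], rel[0])))))
        depth[diff < half_width] = np.minimum(depth[diff < half_width], dist)
    return depth

# Hypothetical "motion primitive" selection: score a few candidate turns by
# progress toward the goal plus clearance read from the egocentric depth scan.
# The weights are arbitrary; EgoGen instead learns this with reinforcement learning.
def pick_turn(depth, heading, goal_dir, turns=(-0.4, 0.0, 0.4)):
    bins = len(depth)
    def clearance(turn):
        idx = int(np.clip((turn + np.pi / 3) / (2 * np.pi / 3) * (bins - 1), 0, bins - 1))
        return depth[max(0, idx - 2): idx + 3].min()
    return max(turns, key=lambda t: np.cos(heading + t - goal_dir) + 0.5 * clearance(t))

# Closed loop: perceive, choose a primitive, move, and perceive again.
position, heading = np.array([0.0, 0.0]), 0.0
goal = np.array([4.0, 0.0])
obstacles = [(np.array([2.0, 0.2]), 0.5)]
for _ in range(40):
    depth = egocentric_depth(position, heading, obstacles)
    goal_dir = np.arctan2(goal[1] - position[1], goal[0] - position[0])
    heading += pick_turn(depth, heading, goal_dir)
    position = position + 0.2 * np.array([np.cos(heading), np.sin(heading)])
print("final position:", np.round(position, 2))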
Related Material

[pdf] [supp] [arXiv]

[bibtex]
@InProceedings{Li_2024_CVPR,
    author    = {Li, Gen and Zhao, Kaifeng and Zhang, Siwei and Lyu, Xiaozhong and Dusmanu, Mihai and Zhang, Yan and Pollefeys, Marc and Tang, Siyu},
    title     = {EgoGen: An Egocentric Synthetic Data Generator},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {14497-14509}
}