ControlEvents: Controllable Synthesis of Event Camera Data with Foundational Prior from Image Diffusion Models
Abstract
Event cameras have gained significant attention due to their bio-inspired properties, such as high temporal resolution and high dynamic range. However, obtaining large-scale labeled ground-truth data for event-based vision tasks remains challenging and costly, which hinders the development of event-based algorithms. In this paper, we present ControlEvents, a diffusion-based generative model designed to synthesize unlimited high-quality event data guided by diverse control signals such as class text labels, 2D skeletons, and 3D body poses. Our key insight is to leverage the diffusion prior from foundation models such as Stable Diffusion, enabling high-quality event data generation with minimal fine-tuning and limited labeled data. Our method streamlines the data generation process and significantly reduces the cost of producing labeled event datasets. We demonstrate the effectiveness of our approach by synthesizing event data for visual recognition, 2D skeleton estimation, and 3D body pose estimation. Our experiments show that the synthesized labeled event data improves model performance on all three tasks. Additionally, our approach can generate events from text labels unseen during training, illustrating the powerful text-based generation capabilities inherited from foundation models.
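The abstract describes conditioning an image-diffusion prior on control signals such as text labels and 2D skeletons. As a rough illustration only, below is a minimal sketch of that kind of conditioned generation using a ControlNet-style pipeline from the diffusers library; the checkpoint names and the ControlNet choice are assumptions made for illustration, not the authors' released code or confirmed architecture.

```python
# Hypothetical sketch: conditioning Stable Diffusion on a 2D-skeleton control
# image to synthesize an event-frame-like output. Assumes a ControlNet-style
# pipeline from the `diffusers` library; the checkpoint ids below are
# illustrative placeholders, not the authors' released models.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# A ControlNet fine-tuned to map rendered 2D skeletons to event frames
# (hypothetical checkpoint id).
controlnet = ControlNetModel.from_pretrained(
    "your-org/controlnet-skeleton-to-events",
    torch_dtype=torch.float16,
)

# Base Stable Diffusion model supplying the foundational image prior.
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

skeleton = load_image("skeleton_condition.png")  # rendered 2D skeleton

image = pipe(
    prompt="event camera frame of a person walking",  # text-label control
    image=skeleton,                                   # spatial control signal
    num_inference_steps=30,
).images[0]
image.save("synthesized_events.png")
```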
Related Material

[bibtex]
@InProceedings{Hu_2026_WACV,
    author    = {Hu, Yixuan and Xue, Yuxuan and Klenk, Simon and Cremers, Daniel and Pons-Moll, Gerard},
    title     = {ControlEvents: Controllable Synthesis of Event Camera Data with Foundational Prior from Image Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {March},
    year      = {2026},
    pages     = {5509-5519}
}