EgoSonics: Generating Synchronized Audio for Silent Egocentric Videos

Aashish Rai, Srinath Sridhar; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 4935-4946

Abstract


We introduce EgoSonics, a method to generate semantically meaningful and synchronized audio tracks conditioned on silent egocentric videos. Generating audio for silent egocentric videos could open new applications in virtual reality and assistive technologies, or for augmenting existing datasets. Existing work has been limited to domains like speech, music, or impact sounds, and cannot easily capture the broad range of audio frequencies found in egocentric videos. EgoSonics addresses these limitations by building on the strength of latent diffusion models for conditioned audio synthesis. We first encode and process audio and video data into a form that is suitable for generation. The encoded data is used to train our model to generate audio tracks that capture the semantics of the input video. Our proposed SyncroNet builds on top of ControlNet to provide control signals that enable temporal synchronization of the synthesized audio. Extensive evaluations show that our model outperforms existing work in audio quality and in our newly proposed synchronization evaluation method. Furthermore, we demonstrate downstream applications of our model in improving video summarization.
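To make the ControlNet-style conditioning idea concrete, below is a minimal, illustrative sketch of how per-frame video features could drive a trainable control branch that injects zero-initialized residuals into a frozen audio latent-diffusion denoiser. All module names, shapes, and hyperparameters (e.g., SyncroNetSketch, video_dim, the toy U-Net blocks) are assumptions for illustration and not the paper's actual implementation.

```python
# Minimal sketch (assumed, not the authors' code) of ControlNet-style conditioning
# of an audio latent-diffusion denoiser on per-frame video features.
import torch
import torch.nn as nn

class TinyUNetBlock(nn.Module):
    """Toy stand-in for one block of a latent-diffusion U-Net over audio spectrogram latents."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
    def forward(self, x):
        return x + self.conv(x)

class SyncroNetSketch(nn.Module):
    """Hypothetical control branch: maps per-frame video features to residual control
    signals added to the frozen denoiser's features via zero-initialized projections,
    in the spirit of ControlNet."""
    def __init__(self, video_dim=512, ch=64, n_blocks=2):
        super().__init__()
        self.video_proj = nn.Linear(video_dim, ch)
        self.blocks = nn.ModuleList(TinyUNetBlock(ch) for _ in range(n_blocks))
        # Zero-initialized 1x1 convs so the control branch starts as a no-op.
        self.zero_convs = nn.ModuleList(nn.Conv2d(ch, ch, 1) for _ in range(n_blocks))
        for zc in self.zero_convs:
            nn.init.zeros_(zc.weight)
            nn.init.zeros_(zc.bias)

    def forward(self, audio_latent, video_feats):
        # audio_latent: (B, ch, F, T) spectrogram latent; video_feats: (B, T, video_dim)
        B, ch, F, T = audio_latent.shape
        v = self.video_proj(video_feats)              # (B, T, ch)
        v = v.permute(0, 2, 1).unsqueeze(2)           # (B, ch, 1, T), aligned with the time axis
        h = audio_latent + v.expand(-1, -1, F, -1)    # broadcast over frequency bins
        controls = []
        for blk, zc in zip(self.blocks, self.zero_convs):
            h = blk(h)
            controls.append(zc(h))                    # per-block residual control signals
        return controls

class FrozenAudioDenoiser(nn.Module):
    """Toy frozen denoiser whose blocks consume the control residuals."""
    def __init__(self, ch=64, n_blocks=2):
        super().__init__()
        self.blocks = nn.ModuleList(TinyUNetBlock(ch) for _ in range(n_blocks))
    def forward(self, noisy_latent, controls):
        h = noisy_latent
        for blk, c in zip(self.blocks, controls):
            h = blk(h) + c                            # inject time-aligned control
        return h                                      # toy noise prediction

if __name__ == "__main__":
    B, ch, F, T, vdim = 2, 64, 32, 48, 512
    syncronet = SyncroNetSketch(video_dim=vdim, ch=ch)
    denoiser = FrozenAudioDenoiser(ch=ch)
    noisy = torch.randn(B, ch, F, T)                  # noisy audio spectrogram latent
    video = torch.randn(B, T, vdim)                   # one feature vector per video frame
    controls = syncronet(noisy, video)
    eps_hat = denoiser(noisy, controls)
    print(eps_hat.shape)                              # torch.Size([2, 64, 32, 48])
```

The key design choice sketched here, borrowed from ControlNet, is that the zero-initialized projections make the control branch initially inert, so training only gradually steers the pretrained audio generator toward temporally synchronized outputs.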

Related Material


[pdf] [supp] [arXiv]
@InProceedings{Rai_2025_WACV,
    author    = {Rai, Aashish and Sridhar, Srinath},
    title     = {EgoSonics: Generating Synchronized Audio for Silent Egocentric Videos},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {4935-4946}
}