Hearing Anything Anywhere
Abstract
Recent years have seen immense progress in 3D computer vision and computer graphics, with emerging tools that can virtualize real-world 3D environments for numerous Mixed Reality (XR) applications. However, alongside immersive visual experiences, immersive auditory experiences are equally vital to our holistic perception of an environment. In this paper, we aim to reconstruct the spatial acoustic characteristics of an arbitrary environment given only a sparse set of (roughly 12) room impulse response (RIR) recordings and a planar reconstruction of the scene, a setup that is easily achievable by ordinary users. To this end, we introduce DiffRIR, a differentiable RIR rendering framework with interpretable parametric models of salient acoustic features of the scene, including sound source directivity and surface reflectivity. This allows us to synthesize novel auditory experiences throughout the space with any source audio. To evaluate our method, we collect a dataset of RIR recordings and music in four diverse real environments. We show that our model outperforms state-of-the-art baselines on rendering monaural and binaural RIRs and music at unseen locations, and that it learns physically interpretable parameters characterizing the acoustic properties of the sound source and surfaces in the scene.
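For context on how an RIR enables rendering "any source audio" at a listener position: to first order, the rendered signal is the convolution of the dry source signal with the RIR measured or predicted at that position. The sketch below is a minimal illustration using NumPy/SciPy, not DiffRIR's actual renderer; the helper name render_at_listener and the synthetic exponentially decaying noise used as a stand-in RIR are assumptions for illustration only.

import numpy as np
from scipy.signal import fftconvolve

def render_at_listener(source, rir):
    # Convolve the dry source with the room impulse response (RIR) at the
    # listener position, then peak-normalize to avoid clipping on export.
    wet = fftconvolve(source, rir, mode="full")
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet

# Toy example: a 440 Hz tone rendered through a synthetic, exponentially
# decaying noise burst standing in for a measured or predicted RIR.
sr = 48000
t = np.arange(sr) / sr
dry = np.sin(2 * np.pi * 440 * t)
decay = np.exp(-6.0 * np.arange(sr // 2) / (sr // 2))
fake_rir = np.random.randn(sr // 2) * decay
rendered = render_at_listener(dry, fake_rir)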
Related Material

[pdf] [supp]

@InProceedings{Wang_2024_CVPR,
  author    = {Wang, Mason Long and Sawata, Ryosuke and Clarke, Samuel and Gao, Ruohan and Wu, Shangzhe and Wu, Jiajun},
  title     = {Hearing Anything Anywhere},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {11790-11799}
}