A Multimodal Approach to Mapping Soundscapes

Tawfiq Salem, Menghua Zhai, Scott Workman, Nathan Jacobs; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2018, pp. 2524-2527

Abstract


We explore the problem of mapping soundscapes, that is, predicting the types of sounds that are likely to be heard at a given geographic location. Using a novel dataset, which includes geo-tagged audio and overhead imagery, we develop an approach for constructing an aural atlas, which captures the geospatial distribution of soundscapes. We build on previous work relating sound to ground-level imagery but incorporate overhead imagery to overcome the limitations of sparsely distributed geo-tagged audio. In the end, all that we require to construct an aural atlas is overhead imagery of the region of interest. We show examples of aural atlases at multiple spatial scales, from the block level to the country level.
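
To make the setting concrete, below is a minimal sketch of the kind of model the abstract describes: a CNN that maps an overhead image to a distribution over sound categories, trained against soft labels derived from nearby geo-tagged audio. This is not the authors' implementation; the ResNet-18 backbone, the number of sound categories, the KL-divergence loss, and the label-derivation step are all assumptions for illustration only.

```python
# Hedged sketch: predict a distribution over sound categories from an
# overhead image. Backbone, category count, and loss are assumptions,
# not details taken from the paper's abstract.
import torch
import torch.nn as nn
import torchvision.models as models

NUM_SOUND_CATEGORIES = 10  # hypothetical number of sound classes


class SoundscapePredictor(nn.Module):
    def __init__(self, num_categories=NUM_SOUND_CATEGORIES):
        super().__init__()
        # ImageNet-pretrained backbone for the overhead image (assumption).
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = nn.Identity()  # keep the 512-d feature vector
        self.backbone = backbone
        self.classifier = nn.Linear(512, num_categories)

    def forward(self, overhead_image):
        features = self.backbone(overhead_image)
        # Log-probabilities over sound categories at this location.
        return torch.log_softmax(self.classifier(features), dim=1)


model = SoundscapePredictor()

# Soft targets: e.g., the empirical sound-category distribution of audio
# clips geo-tagged near each overhead image (assumed preprocessing step).
images = torch.randn(4, 3, 224, 224)
targets = torch.softmax(torch.randn(4, NUM_SOUND_CATEGORIES), dim=1)

# KLDivLoss expects log-probabilities as input and probabilities as target.
loss = nn.KLDivLoss(reduction="batchmean")(model(images), targets)
loss.backward()
```

Once trained, such a model needs only overhead imagery of a region of interest: evaluating it on a dense grid of image tiles yields a per-location sound distribution, which is the kind of "aural atlas" the abstract refers to.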

Related Material


[pdf]
[bibtex]
@InProceedings{Salem_2018_CVPR_Workshops,
  author    = {Salem, Tawfiq and Zhai, Menghua and Workman, Scott and Jacobs, Nathan},
  title     = {A Multimodal Approach to Mapping Soundscapes},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2018}
}