SonicVisionLM: Playing Sound with Vision Language Models

Zhifeng Xie, Shengye Yu, Qile He, Mengtian Li; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 26866-26875

Abstract


There has been a growing interest in the task of generating sound for silent videos, primarily because of its practicality in streamlining video post-production. However, existing methods for video-sound generation attempt to directly create sound from visual representations, which can be challenging due to the difficulty of aligning visual representations with audio representations. In this paper, we present SonicVisionLM, a novel framework aimed at generating a wide range of sound effects by leveraging vision-language models (VLMs). Instead of generating audio directly from video, we use the capabilities of powerful VLMs. When provided with a silent video, our approach first identifies events within the video using a VLM to suggest possible sounds that match the video content. This shift in approach transforms the challenging task of aligning image and audio into the more well-studied sub-problems of aligning image-to-text and text-to-audio through popular diffusion models. To improve the quality of audio recommendations with LLMs, we have collected an extensive dataset that maps text descriptions to specific sound effects and developed a time-controlled audio adapter. Our approach surpasses current state-of-the-art methods for converting video to audio, enhancing synchronization with the visuals and improving alignment between audio and video components. Project page: https://yusiissy.github.io/SonicVisionLM.github.io/
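To make the two-stage decomposition concrete, the minimal Python sketch below separates the image-to-text step (a VLM proposes time-stamped sound descriptions for the silent video) from the text-to-audio step (a diffusion model with a time-controlled adapter renders each description), then composes the clips onto a timeline. All names here (SoundEvent, suggest_sound_events, synthesize_audio) are hypothetical stand-ins for illustration, not the authors' released code; both model stages are stubbed out.

# Illustrative two-stage pipeline in the spirit of SonicVisionLM.
# The VLM and text-to-audio diffusion model are replaced by stubs.
from dataclasses import dataclass
from typing import List
import numpy as np

SAMPLE_RATE = 16_000  # assumed audio sample rate for this sketch

@dataclass
class SoundEvent:
    description: str   # suggested sound, e.g. "footsteps on a wooden floor"
    onset_s: float     # when the sound should start, in seconds
    duration_s: float  # how long the sound should last, in seconds

def suggest_sound_events(video_path: str) -> List[SoundEvent]:
    """Stage 1 (image-to-text): a VLM watches the silent video and proposes
    time-stamped sound descriptions. Stubbed with a fixed example."""
    return [SoundEvent("footsteps on a wooden floor", onset_s=0.5, duration_s=2.0)]

def synthesize_audio(event: SoundEvent) -> np.ndarray:
    """Stage 2 (text-to-audio): a diffusion model with a time-controlled
    adapter renders the described sound for the requested duration.
    Stubbed with silence of the right length."""
    return np.zeros(int(event.duration_s * SAMPLE_RATE), dtype=np.float32)

def generate_soundtrack(video_path: str, total_s: float) -> np.ndarray:
    """Place each generated clip at its onset on a timeline aligned with the video."""
    track = np.zeros(int(total_s * SAMPLE_RATE), dtype=np.float32)
    for event in suggest_sound_events(video_path):
        clip = synthesize_audio(event)
        start = int(event.onset_s * SAMPLE_RATE)
        # truncate the clip if it would run past the end of the timeline
        track[start:start + len(clip)] += clip[: len(track) - start]
    return track

if __name__ == "__main__":
    audio = generate_soundtrack("silent_clip.mp4", total_s=5.0)
    print(audio.shape)  # (80000,) samples at 16 kHz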

Related Material


[bibtex]
@InProceedings{Xie_2024_CVPR,
    author    = {Xie, Zhifeng and Yu, Shengye and He, Qile and Li, Mengtian},
    title     = {SonicVisionLM: Playing Sound with Vision Language Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {26866-26875}
}