@InProceedings{Vitasovic_2025_ICCV,
    author    = {Vitasovic, Leo and Gra{\ss}hof, Stella and Kloft, Agnes Mercedes and Lehtola, Ville V. and Cunneen, Martin and Starostka, Justyna and Mcgarry, Glenn and Li, Kun and Brandt, Sami Sebastian},
    title     = {From Sound to Sight: Towards AI-authored Music Videos},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2025},
    pages     = {3792-3802}
}
From Sound to Sight: Towards AI-authored Music Videos
Abstract
Conventional music visualisation systems rely on handcrafted, ad hoc transformations of shapes and colours that offer only limited expressiveness. We propose two novel pipelines for automatically generating music videos from any user-specified vocal or instrumental song using off-the-shelf deep learning models. Inspired by the manual workflows of music video producers, we investigate how well latent feature-based techniques can analyse audio to detect musical qualities, such as emotional cues and instrumental patterns, and distil them into textual scene descriptions using a language model. Next, we employ a generative model to produce the corresponding video clips. To assess the generated videos, we identify several critical aspects, then design and conduct a preliminary user evaluation that demonstrates storytelling potential, visual coherence, and emotional alignment with the music. Our findings underscore the potential of latent feature techniques and deep generative models to expand music visualisation beyond traditional approaches.