Editing Like Humans: A Contextual, Multimodal Framework for Automated Video Editing

Sharath Koorathota, Patrick Adelman, Kelly Cotton, Paul Sajda; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 1701-1709

Abstract

We propose an automated video editing model, which we term contextual and multimodal video editing (CMVE). The model leverages visual and textual metadata describing video clips to integrate essential information from both modalities, and uses an editing style learned from a single example video to coherently combine clips. The editing model is useful for tasks such as generating news clip montages and highlight reels from a text query that describes the video storyline. The model exploits the perceptual similarity between video frames, objects in videos, and text descriptions to emulate coherent video editing. Amazon Mechanical Turk participants made judgments comparing CMVE to expert human editing. Experimental results showed no significant difference between CMVE and human-edited videos in how well they matched the text query or in the level of interest they generated, suggesting that CMVE can effectively integrate semantic information across visual and textual modalities and create perceptually coherent, high-quality videos typical of human video editors. We publicly release an online demonstration of our method.
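As a rough illustration of the similarity-based matching the abstract describes, the sketch below ranks candidate clips against a text query by cosine similarity in a shared embedding space. The encoder choice, embedding dimension, and function names are assumptions made for this example; the paper's actual pipeline additionally fuses visual and textual metadata and applies an editing style learned from an example video.

import numpy as np

def rank_clips(query_emb: np.ndarray, clip_embs: np.ndarray, top_k: int = 5) -> np.ndarray:
    """Rank candidate clips by cosine similarity to a text-query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    c = clip_embs / np.linalg.norm(clip_embs, axis=1, keepdims=True)
    sims = c @ q                       # one similarity score per clip
    return np.argsort(-sims)[:top_k]   # best-matching clip indices first

# Toy usage with random stand-ins for real encoder outputs.
rng = np.random.default_rng(0)
clip_embs = rng.normal(size=(10, 512))   # 10 clips, 512-d embeddings (assumed)
query_emb = rng.normal(size=512)         # text-query embedding (assumed)
print(rank_clips(query_emb, clip_embs))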

Related Material

[bibtex]
@InProceedings{Koorathota_2021_CVPR,
    author    = {Koorathota, Sharath and Adelman, Patrick and Cotton, Kelly and Sajda, Paul},
    title     = {Editing Like Humans: A Contextual, Multimodal Framework for Automated Video Editing},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {1701-1709}
}