Fast Forwarding Egocentric Videos by Listening and Watching

Vinicius S. Furlan, Ruzena Bajcsy, Erickson R. Nascimento; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2018, pp. 2504-2507

Abstract


The remarkable technological advance in well-equipped wearable devices is pushing an increasing production of long first-person videos. However, since most of these videos contain long and tedious parts, they are forgotten or never watched. Despite the large number of techniques proposed to fast-forward such videos by highlighting relevant moments, most of them are image-based only and disregard other relevant sensors present in current devices, such as high-definition microphones. In this work, we propose a new approach to fast-forward videos using psychoacoustic metrics extracted from the soundtrack. These metrics can be used to estimate the annoyance of a segment, allowing our method to emphasize moments of sound pleasantness. The efficiency of our method is demonstrated through qualitative results and quantitative results in terms of speed-up and instability.
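To make the idea concrete, below is a minimal, hypothetical Python sketch of audio-driven adaptive fast-forwarding. The abstract does not specify which psychoacoustic metrics the authors compute, so RMS energy and spectral centroid are used here only as crude stand-ins for loudness and sharpness; the file name, segment length, mixing weights, and speed-up bounds are illustrative assumptions, not the paper's method.

# Hypothetical sketch: score audio segments for "annoyance" and map the
# score to a per-segment speed-up rate (pleasant segments play slower).
# RMS energy and spectral centroid are crude stand-ins for psychoacoustic
# loudness and sharpness; all thresholds and weights are assumptions.
import numpy as np
import librosa

def segment_annoyance(audio_path, segment_sec=1.0):
    """Score each fixed-length audio segment; higher means more annoying."""
    y, sr = librosa.load(audio_path, sr=None, mono=True)
    hop = 512
    rms = librosa.feature.rms(y=y, hop_length=hop)[0]                        # loudness proxy
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr, hop_length=hop)[0]  # sharpness proxy
    frames_per_seg = max(1, int(segment_sec * sr / hop))
    n_seg = len(rms) // frames_per_seg
    scores = []
    for i in range(n_seg):
        sl = slice(i * frames_per_seg, (i + 1) * frames_per_seg)
        # Normalize each proxy by its track-wide maximum before mixing.
        loud = rms[sl].mean() / (rms.max() + 1e-8)
        sharp = centroid[sl].mean() / (centroid.max() + 1e-8)
        scores.append(0.5 * loud + 0.5 * sharp)
    return np.asarray(scores)

def speedup_per_segment(scores, min_speed=2, max_speed=10):
    """Map annoyance scores to integer speed-up rates per segment."""
    span = scores.max() - scores.min() + 1e-8
    norm = (scores - scores.min()) / span
    return np.round(min_speed + norm * (max_speed - min_speed)).astype(int)

if __name__ == "__main__":
    scores = segment_annoyance("egocentric_clip.wav")   # hypothetical input file
    print(speedup_per_segment(scores))

A video fast-forwarder could then drop frames within each segment according to the returned rate, keeping more frames where the soundtrack is scored as pleasant.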

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Furlan_2018_CVPR_Workshops,
author = {Furlan, Vinicius S. and Bajcsy, Ruzena and Nascimento, Erickson R.},
title = {Fast Forwarding Egocentric Videos by Listening and Watching},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2018}
}