Semantic Speech Retrieval With a Visually Grounded Model of Untranscribed Speech

Herman Kamper, Gregory Shakhnarovich, Karen Livescu; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2018, pp. 2514-2517

Abstract


There is growing interest in speech models that can learn from unlabelled speech paired with visual context. Here we study how a visually grounded speech model, trained on images of scenes paired with spoken captions, captures aspects of semantics. We use an external image tagger to generate soft text labels from images, which serve as targets for a neural model that maps untranscribed speech to (semantic) keyword labels. We introduce a newly collected data set of human semantic relevance judgements and an associated task, semantic speech retrieval, where the goal is to search for spoken utterances that are semantically relevant to a given text query. Without seeing any text, the model trained on parallel speech and images achieves a precision of almost 60% on its top ten semantic retrievals. Compared to a supervised model trained on transcriptions, our model matches human judgements better by some measures, especially in retrieving non-verbatim semantic matches.
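The retrieval step described above can be sketched in a few lines. This is a toy illustration, not the authors' implementation: it assumes the trained model has already produced, for each utterance, a vector of per-keyword probabilities (learned from the image tagger's soft labels), and that a text query is answered by ranking utterances on the query keyword's probability. All names and numbers below are made up for the sketch.

```python
import numpy as np

# Hypothetical keyword vocabulary; in the paper this comes from the
# external image tagger's label set.
KEYWORDS = ["beach", "dog", "snow", "bike"]

def retrieve(query, utterance_probs, top_k=2):
    """Rank utterance ids by the model's predicted probability of `query`.

    utterance_probs: dict mapping utterance id -> probability vector
    aligned with KEYWORDS. Returns the top_k utterance ids.
    """
    q = KEYWORDS.index(query)
    ranked = sorted(utterance_probs.items(), key=lambda kv: -kv[1][q])
    return [utt_id for utt_id, _ in ranked[:top_k]]

# Toy per-utterance keyword probabilities (one row per utterance).
probs = {
    "utt1": np.array([0.9, 0.1, 0.0, 0.2]),  # speech likely about a beach
    "utt2": np.array([0.2, 0.8, 0.1, 0.1]),  # speech likely about a dog
    "utt3": np.array([0.7, 0.3, 0.0, 0.4]),
}

print(retrieve("beach", probs))  # -> ['utt1', 'utt3']
```

Note that because the targets are semantic tags rather than transcriptions, an utterance can rank highly for "beach" without ever containing the word, which is exactly the non-verbatim matching the abstract highlights.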

Related Material


[pdf]
[bibtex]
@InProceedings{Kamper_2018_CVPR_Workshops,
author = {Kamper, Herman and Shakhnarovich, Gregory and Livescu, Karen},
title = {Semantic Speech Retrieval With a Visually Grounded Model of Untranscribed Speech},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2018}
}