Zero-shot keyword spotting for visual speech recognition in-the-wild
Themos Stafylakis, Georgios Tzimiropoulos; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 513-529
Abstract
Visual keyword spotting (KWS) is the problem of estimating whether a text query occurs in a given recording using only video information. This paper focuses on visual KWS for words unseen during training, a real-world, practical setting which so far has received no attention from the community. To this end, we devise an end-to-end architecture comprising (a) a state-of-the-art visual feature extractor based on spatiotemporal Residual Networks, (b) a grapheme-to-phoneme model based on sequence-to-sequence neural networks, and (c) a stack of recurrent neural networks which learn how to correlate visual features with the keyword representation. Unlike prior work on KWS, which tries to learn word representations merely from sequences of graphemes (i.e. letters), we propose the use of a grapheme-to-phoneme encoder-decoder model which learns how to map words to their pronunciation. We demonstrate that our system obtains very promising visual-only KWS results on the challenging LRS2 database for keywords unseen during training. We also show that our system outperforms a baseline which addresses KWS via automatic speech recognition (ASR), while it drastically improves over other recently proposed ASR-free KWS methods.
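The data flow described in (a)-(c) can be sketched in a few lines. The snippet below is a shape-level illustration only, not the paper's implementation: all weights are random and untrained, the dimensions (`G`, `H`, `V`, etc.) are hypothetical, and only the encoder half of the grapheme-to-phoneme model is shown (in the paper it is trained jointly with a phoneme decoder). It shows how a keyword's letter sequence is mapped to a fixed-size embedding that is then correlated with per-frame visual features to produce a detection score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, not taken from the paper:
G, D, H, V = 28, 16, 32, 64   # grapheme vocab, embed dim, hidden dim, visual dim
T = 25                        # number of video frames in a clip


def rnn_last_hidden(seq_emb, W_in, W_h):
    """Simple tanh RNN; returns the final hidden state as a sequence summary."""
    h = np.zeros(W_h.shape[0])
    for x in seq_emb:
        h = np.tanh(W_in @ x + W_h @ h)
    return h


# (b) Grapheme encoder: map a keyword's letters to a pronunciation-aware embedding.
emb_g = rng.standard_normal((G, D))
W_in, W_h = rng.standard_normal((H, D)), rng.standard_normal((H, H)) * 0.1

def encode_keyword(grapheme_ids):
    return rnn_last_hidden(emb_g[grapheme_ids], W_in, W_h)  # shape (H,)


# (c) Correlate the keyword embedding with per-frame visual features
# (stand-ins for the spatiotemporal ResNet outputs of component (a)).
W_v, W_k = rng.standard_normal((H, V)), rng.standard_normal((H, H))
w_out = rng.standard_normal(H)

def kws_score(visual_feats, keyword_vec):
    # visual_feats: (T, V); one score per frame, then pool over time.
    frame_scores = np.tanh(visual_feats @ W_v.T + keyword_vec @ W_k.T) @ w_out
    return 1.0 / (1.0 + np.exp(-frame_scores.max()))  # "keyword present anywhere?"


keyword = rng.integers(0, G, size=6)   # letter IDs of some (possibly unseen) word
video = rng.standard_normal((T, V))    # visual features for one clip
score = kws_score(video, encode_keyword(keyword))
```

Because the query is encoded from its letters rather than looked up in a closed vocabulary, the same machinery applies to keywords never seen during training, which is what makes the setting zero-shot.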
Related Material
[pdf]
[arXiv]
[bibtex]
@InProceedings{Stafylakis_2018_ECCV,
author = {Stafylakis, Themos and Tzimiropoulos, Georgios},
title = {Zero-shot keyword spotting for visual speech recognition in-the-wild},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}