Listening With Your Eyes: Towards a Practical Visual Speech Recognition System Using Deep Boltzmann Machines

Chao Sui, Mohammed Bennamoun, Roberto Togneri; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 154-162

Abstract


This paper presents a novel feature learning method for visual speech recognition using Deep Boltzmann Machines (DBM). Unlike all existing visual feature extraction techniques, which solely extract features from video sequences, our method is able to exploit both acoustic and visual information to learn a better visual feature representation in the training stage. During the test stage, instead of using both audio and visual signals, only the videos are used to generate the missing audio features, and both the given visual and the generated audio features are used to obtain a joint representation. We carried out our experiments on a large-scale audio-visual data corpus, and the results show that our proposed technique outperforms handcrafted features as well as features learned by other commonly used deep learning techniques.
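The cross-modal inference idea in the abstract, filling in the missing audio modality from video alone and then forming a joint representation, can be sketched with a toy bimodal model. This is only an illustration, not the authors' DBM: the weights below are random placeholders (in the paper they would come from DBM training on paired audio-visual data), the dimensions are arbitrary, and a single hidden layer with alternating Gibbs-style updates stands in for the full deep architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Arbitrary placeholder dimensions for visual, audio, and hidden units.
n_vis, n_aud, n_hid = 20, 12, 8

# Hypothetical weights: in the actual system these would be learned
# from paired audio-visual training data, not drawn at random.
W_v = rng.normal(scale=0.1, size=(n_vis, n_hid))
W_a = rng.normal(scale=0.1, size=(n_aud, n_hid))
b_h = np.zeros(n_hid)
b_a = np.zeros(n_aud)

def infer_joint_from_video(v, n_gibbs=10):
    """Given only visual features v, alternate sampling-free mean-field
    updates to fill in the missing audio modality, then return the joint
    hidden representation computed from both modalities."""
    a = np.full(n_aud, 0.5)                     # initialise missing audio at its mean
    for _ in range(n_gibbs):
        h = sigmoid(v @ W_v + a @ W_a + b_h)    # hidden given both modalities
        a = sigmoid(h @ W_a.T + b_a)            # update audio given hidden
    return sigmoid(v @ W_v + a @ W_a + b_h)     # joint representation

v = rng.random(n_vis)                           # stand-in visual feature vector
joint = infer_joint_from_video(v)
print(joint.shape)  # (8,)
```

At test time only `v` is supplied, mirroring the paper's setting where the audio signal is unavailable and must be reconstructed before the joint representation is read off.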

Related Material


[bibtex]
@InProceedings{Sui_2015_ICCV,
author = {Sui, Chao and Bennamoun, Mohammed and Togneri, Roberto},
title = {Listening With Your Eyes: Towards a Practical Visual Speech Recognition System Using Deep Boltzmann Machines},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015}
}