Expressive Visual Text-to-Speech Using Active Appearance Models

Robert Anderson, Bjorn Stenger, Vincent Wan, Roberto Cipolla; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 3382-3389

Abstract

This paper presents a complete system for expressive visual text-to-speech (VTTS), which is capable of producing expressive output, in the form of a 'talking head', given an input text and a set of continuous expression weights. The face is modeled using an active appearance model (AAM), and several extensions are proposed which make it more applicable to the task of VTTS. The model allows for normalization with respect to both pose and blink state, which significantly reduces artifacts in the resulting synthesized sequences. We demonstrate quantitative improvements in terms of reconstruction error, evaluated over a million frames, as well as in large-scale user studies comparing the output of different systems.
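
For readers unfamiliar with active appearance models, the snippet below sketches the basic AAM synthesis step that such a system builds on: a linear shape model and a linear appearance model, with the synthesized appearance warped onto the generated shape. This is a minimal Python/NumPy illustration under assumed names (aam_synthesize, warp_fn, the basis matrices), not the authors' implementation, which additionally separates out pose and blink modes so that sequences can be normalized against them.

import numpy as np

def aam_synthesize(shape_mean, shape_modes, shape_params,
                   app_mean, app_modes, app_params, warp_fn):
    """Generic AAM synthesis (illustrative only, not the paper's pipeline).

    shape_mean   : (2K,)   mean landmark coordinates
    shape_modes  : (M, 2K) shape basis, e.g. from PCA on training landmarks
    shape_params : (M,)    shape coefficients (the paper's extension would
                           hold pose/blink modes in separate groups)
    app_mean     : (P,)    mean appearance sampled in the mean-shape frame
    app_modes    : (N, P)  appearance basis
    app_params   : (N,)    appearance coefficients
    warp_fn      : callable(appearance, shape) -> image, e.g. a piecewise
                   affine warp over a landmark triangulation
    """
    # Linear shape model: s = s0 + S^T p
    shape = shape_mean + shape_modes.T @ shape_params
    # Linear appearance model in the mean-shape frame: a = a0 + A^T q
    appearance = app_mean + app_modes.T @ app_params
    # Warp the appearance from the mean shape onto the generated shape
    return warp_fn(appearance, shape)

In a VTTS setting, the text-to-speech front end would drive the shape and appearance coefficients frame by frame, with the expression weights modulating those trajectories; the details of that mapping are specific to the paper and are not reproduced here.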

Related Material

[pdf]
[bibtex]
@InProceedings{Anderson_2013_CVPR,
author = {Anderson, Robert and Stenger, Bjorn and Wan, Vincent and Cipolla, Roberto},
title = {Expressive Visual Text-to-Speech Using Active Appearance Models},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2013}
}