Mapping from Speech to Images Using Continuous State Space Models |
|
Abstract | In this paper a system that transforms speech waveforms into
animated faces is proposed. The system relies on continuous state space models
to perform the mapping, which makes it possible to produce video with no
sudden jumps and allows continuous control of the parameters in 'face space'.
The performance of the system depends critically on the number of hidden
variables: with too few variables the model cannot represent the data, and
with too many it overfits. Simulations are performed on recordings of 3-5 sec.
video sequences with sentences from the TIMIT database. From a subjective
point of view, the model is able to construct an image sequence from an
unknown noisy speech sequence even though the number of training examples
is limited. |
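As a rough illustration of the approach the abstract describes, the sketch below implements a generic linear-Gaussian state space model (a standard instance of a continuous state space model, not necessarily the paper's exact formulation): a Kalman filter infers a smooth hidden trajectory from noisy speech features, and a linear readout maps that trajectory to face-space parameters. All dimensions, matrix values, and the function name are hypothetical placeholders standing in for quantities that would be learned from paired audio/video training data.

```python
import numpy as np

# Hypothetical dimensions: hidden state, speech features, face-space
# parameters. In practice these come from the feature extraction and
# the face model; here they are placeholders.
n_x, n_audio, n_face = 4, 12, 6

rng = np.random.default_rng(0)
A  = 0.9 * np.eye(n_x)                 # state dynamics (smooth by design)
Ca = rng.normal(size=(n_audio, n_x))   # hidden state -> speech features
Cf = rng.normal(size=(n_face, n_x))    # hidden state -> face-space params
Q  = 0.01 * np.eye(n_x)                # process noise covariance
Ra = 0.5 * np.eye(n_audio)             # speech observation noise covariance

def speech_to_faces(audio):
    """Kalman-filter a (T, n_audio) speech-feature sequence and read the
    filtered hidden states out as a (T, n_face) face-space trajectory.
    The Markov prior keeps consecutive states close, which is what rules
    out sudden jumps between video frames."""
    x, P = np.zeros(n_x), np.eye(n_x)
    faces = np.empty((len(audio), n_face))
    for t, y in enumerate(audio):
        # Predict: propagate the hidden state one frame forward.
        x, P = A @ x, A @ P @ A.T + Q
        # Update: condition on the noisy speech observation.
        S = Ca @ P @ Ca.T + Ra
        K = np.linalg.solve(S, Ca @ P).T          # Kalman gain
        x = x + K @ (y - Ca @ x)
        P = (np.eye(n_x) - K @ Ca) @ P
        # Read out face-space parameters from the filtered state.
        faces[t] = Cf @ x
    return faces

faces = speech_to_faces(rng.normal(size=(100, n_audio)))  # 100 frames
print(faces.shape)  # -> (100, 6)
```

In a real system the model matrices would be fit with something like EM on paired recordings, and the face-space trajectory would drive the rendering of the animated face; the point of the sketch is only how the Markov state dynamics enforce temporal continuity.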
Type | Conference paper [With referee] |
Conference | Lecture Notes in Computer Science |
Editors | |
Year | 2005 |
Month | January |
Vol. | 3361 |
pp. | 136-145 |
Publisher | Springer |
Electronic version(s) | [pdf] |
BibTeX data | [bibtex] |
IMM Group(s) | Intelligent Signal Processing |