Multimedia Mapping using Continuous State Space Models
|Abstract||In this paper a system that transforms speech waveforms into animated faces is proposed. The system relies on continuous state space models to perform the mapping; this makes it possible to generate video without sudden jumps and allows continuous control of the parameters in 'face space'. Simulations are performed on recordings of 3-5 sec. video sequences with sentences from the TIMIT database. The model is able to construct an image sequence from an unknown noisy speech sequence fairly well, even though the number of training examples is limited.|
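The abstract's core idea, a shared continuous hidden state that is inferred from speech and then read out as face parameters, can be sketched as a linear-Gaussian state space model with Kalman filtering. This is only an illustrative sketch, not the paper's actual model: the matrices `A`, `C_s`, `C_f` and the noise covariances are hypothetical placeholders, and a trained system would learn them from paired audio/video data.

```python
import numpy as np

def kalman_filter_faces(speech, A, C_s, C_f, Q, R, x0, P0):
    """Infer a smoothly evolving hidden state from speech features
    with a Kalman filter, then map each filtered state to face-space
    parameters. Because the state evolves under linear dynamics, the
    resulting face trajectory has no sudden jumps.

    speech : (T, p) array of speech feature vectors
    A      : (d, d) state transition matrix
    C_s    : (p, d) speech observation matrix
    C_f    : (q, d) face-parameter readout matrix
    Q, R   : state and observation noise covariances
    x0, P0 : initial state mean and covariance
    """
    d = A.shape[0]
    x, P = x0, P0
    faces = []
    for y in speech:
        # Predict step: propagate the state through the dynamics
        x = A @ x
        P = A @ P @ A.T + Q
        # Update step: correct with the observed speech frame
        S = C_s @ P @ C_s.T + R
        K = P @ C_s.T @ np.linalg.inv(S)
        x = x + K @ (y - C_s @ x)
        P = (np.eye(d) - K @ C_s) @ P
        # Read out face parameters from the continuous state
        faces.append(C_f @ x)
    return np.array(faces)
```

With stable dynamics (eigenvalues of `A` inside the unit circle), consecutive filtered states stay close to each other, which is what guarantees temporally coherent video frames.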
|Type||Conference paper [With referee]|
|Conference||IEEE 6th Workshop on Multimedia Signal Processing Proceedings|
|Year||2004|
|Month||June|
|Pages||51--54|
|ISBN / ISSN||0-7803-8579-9|
|IMM Group(s)||Intelligent Signal Processing|