|Abstract||This demo describes how to generate facial expressions using a speech signal. |
The motivation for creating talking faces is (at least) threefold. First, the dubbing of movies into another language often leaves the actor's mouth moving during silence, or the other way around, which looks rather unnatural. If it were possible to manipulate the actor's face to match the actual speech, dubbed movies (and cartoons) would be much more pleasant to view. Secondly, even with increasing bandwidth, sending video via a cell phone is quite expensive; a system that sends a single image at the beginning of the conversation and then animates the face to match the speech would therefore be useful. Thirdly, when building on-screen agents (like Mr. Clips), communication would be more plausible if the agent's lip movements corresponded to the (automatically generated) speech.
The demo briefly explains the techniques behind talking faces and shows example animations.
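The page does not describe the model in detail, but the keywords point to a state-space formulation with Kalman filtering. As a rough, hypothetical sketch of that machinery: a hidden state x_t (e.g. facial-dynamics parameters) evolves linearly and is inferred from observed speech features y_t. All matrices, dimensions, and values below are invented for illustration and are not the parameters used in the demo:

```python
import numpy as np

def kalman_filter(y, A, C, Q, R, x0, P0):
    """Standard Kalman filter over observations y (T x dy).

    x_t = A x_{t-1} + w_t,  w_t ~ N(0, Q)   (state dynamics)
    y_t = C x_t + v_t,      v_t ~ N(0, R)   (observation model)
    Returns the filtered state means (T x dx).
    """
    x, P = x0, P0
    means = []
    for yt in y:
        # Predict step: propagate state mean and covariance
        x = A @ x
        P = A @ P @ A.T + Q
        # Update step: correct with the new observation
        S = C @ P @ C.T + R             # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)  # Kalman gain
        x = x + K @ (yt - C @ x)
        P = P - K @ C @ P
        means.append(x)
    return np.array(means)

# Toy run: 2-D hidden state, 1-D observation sequence (synthetic data)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.1]])
rng = np.random.default_rng(0)
y = rng.standard_normal((50, 1))
xs = kalman_filter(y, A, C, Q, R, np.zeros(2), np.eye(2))
print(xs.shape)  # (50, 2)
```

In a talking-face setting, the filtered states would then drive a facial model (e.g. active appearance model parameters) frame by frame; that mapping is not shown here.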
|Keywords||Talking faces, state space models, active appearance models, AAM, Kalman filters|
|BibTeX data|| [bibtex]|
|IMM Group(s)||Intelligent Signal Processing|