Lip Synchronisation Based on Wave Patterns

Pierre Steinmann Bach

Abstract: This report gives a short introduction to the basic elements of lip synchronization, explains the general idea behind it, and examines whether a fully automated tool for lip synchronization between an arbitrary voice file and an animated head could be implemented. It then presents two approaches to the speech-processing problem: the traditional one based on Hidden Markov models and another using Linear Predictive Analysis. Possible implementations of the facial animation are also analyzed, and an evaluation of the products currently on the market is carried out. From this it is concluded that a fully automated lip-synchronization tool can be implemented and already exists on the market; however, whether an in-house implementation should be undertaken or an off-the-shelf product adopted depends on the specific needs and does not have a conclusive answer.

In essence, the choice stands between the two commercial products LifeStudio:HEAD and Lipsync 2.0, and an in-house implementation of the LP-based method with a parameterized approach to the facial animation.
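As an illustration of the LP-based method mentioned above, the following is a minimal sketch (not taken from the thesis) of how LPC coefficients could be estimated for one audio frame using the autocorrelation method and the Levinson-Durbin recursion; such coefficients describe the vocal-tract filter and could then be mapped to mouth shapes. The frame length, LPC order, and synthetic test signal are illustrative choices, and NumPy is assumed to be available.

```python
# Minimal LPC sketch (illustrative, not from the thesis): autocorrelation
# method + Levinson-Durbin recursion for a single windowed audio frame.
import numpy as np

def lpc_coefficients(frame, order=10):
    """Return LPC coefficients a[0..order] (a[0] == 1) for one audio frame."""
    frame = frame * np.hamming(len(frame))  # taper the frame edges
    # Autocorrelation for lags 0..order
    r = np.array([np.dot(frame[:len(frame) - k], frame[k:])
                  for k in range(order + 1)])
    # Levinson-Durbin recursion
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                       # reflection coefficient
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1][:i]
        err *= (1.0 - k * k)                 # updated prediction error
    return a

if __name__ == "__main__":
    sr = 16000
    t = np.arange(512) / sr
    # Synthetic vowel-like frame: two sinusoids standing in for formants.
    frame = np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
    print(lpc_coefficients(frame, order=10))
```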
Type: Bachelor thesis [Academic thesis]
Year: 2005
Publisher: Informatics and Mathematical Modelling, Technical University of Denmark, DTU
Address: Richard Petersens Plads, Building 321, DK-2800 Kgs. Lyngby
Series: IMM-B.Eng-2005-8
Note: Supervised by Assoc. Prof. Bent Froehlke Nielsen, IMM
Electronic version(s): [pdf]
BibTeX data: [bibtex]
IMM Group(s): Computer Science & Engineering