@CONFERENCE{IMM2006-04487,
  author    = "X. Zhou and L. Ross and T. Lehn-Schi{\o}ler and L. Parra",
  title     = "Temporal visual cues aid speech recognition",
  year      = "2006",
  booktitle = "7th Annual Meeting of the International Multisensory Research Forum",
  url       = "http://www2.compute.dtu.dk/pubdb/pubs/4487-full.html",
  abstract  = "BACKGROUND: It is well known that under noisy conditions, viewing a speaker's articulatory movement aids the recognition of spoken words. Conventionally, it is thought that the visual input disambiguates otherwise confusing auditory input. {HYPOTHESIS}: In contrast, we hypothesize that it is the temporal synchronicity of the visual input that aids parsing of the auditory stream. More specifically, we expected that purely temporal information, which does not convey cues such as place of articulation, may facilitate word recognition. {METHODS}: To test this prediction we used temporal features of the audio to generate an artificial talking-face video and measured word recognition performance on simple monosyllabic words. {RESULTS}: When presenting words together with the artificial video, we find that word recognition is improved over purely auditory presentation. The effect is significant (p<0.01) for {SNR} at or above -12 dB. For lower {SNR} the visual temporal information does not improve recognition, confirming that our visual input does not contain useful lip-reading information in itself. {CONCLUSION}: Thus, we argue that temporal information is used in addition to articulatory features. This finding supports the notion that synchronous visual input aids auditory processing at an early parsing stage."
}