- The perception of mouthshape: photographic images of natural speech sounds can be perceived categorically. Ruth Campbell, Philip J. Benson, Simon B. Wallace. 1-4 [doi]
- Italian consonantal visemes: relationships between spatial/temporal articulatory characteristics and coproduced acoustic signal. Emanuela Magno Caldognetto, Claudio Zmarich, Piero Cosi, Franco Ferrero. 5-8 [doi]
- Negative effect of homophones on speechreading in Japanese. Shizuo Hiki, Yumiko Fukuda. 9-12 [doi]
- Visual rhyming effects in deaf children. Jacqueline Leybaert, Daniela Marchetti. 13-16 [doi]
- Context sensitive faces. Isabella Poggi, Catherine Pelachaud. 17-20 [doi]
- Effects of phonetic variation and the structure of the lexicon on the uniqueness of words. Edward T. Auer Jr., Lynne E. Bernstein, R. S. Waldstein, P. E. Tucker. 21-24 [doi]
- A methodology to quantify the contribution of visual and prosodic information to the process of speech comprehension. Loredana Cerrato, Federico Albano Leoni, Andrea Paoloni. 25-28 [doi]
- The effects of speaking rate on visual speech intelligibility. Jean-Pierre Gagné, Lina Boutin. 29-32 [doi]
- Micro- and macro-bimodality. Emanuela Magno Caldognetto, Isabella Poggi. 33-36 [doi]
- Can the visual input make the audio signal "pop out" in noise? A first study of the enhancement of noisy VCV acoustic sequences by audio-visual fusion. Laurent Girin, Jean-Luc Schwartz, Gang Feng. 37-40 [doi]
- Quantitative association of orofacial and vocal-tract shapes. Hani Yehia, Philip Rubin, Eric Vatikiotis-Bateson. 41-44 [doi]
- Phonological representation and speech understanding with cochlear implants in deafened adults. Björn Lyxell, Ulf Andersson, Stig Arlinger, Henrik Harder, Jerker Rönnberg. 45-48 [doi]
- Audio-visual speech recognition and segmental master-slave HMM. Régine André-Obrecht, Bruno Jacob, Nathalie Parlangeau. 49-52 [doi]
- Combining noise compensation with visual information in speech recognition. Stephen J. Cox, Iain Matthews, J. Andrew Bangham. 53-56 [doi]
- Neural architectures for sensor fusion in speech recognition. Gabi Krone, B. Talk, Andreas Wichert, Günther Palm. 57-60 [doi]
- Adaptive determination of audio and visual weights for automatic speech recognition. Alexandrina Rogozan, Paul Deléglise, Mamoun Alissali. 61-64 [doi]
- Speaker independent audio-visual database for bimodal ASR. Gerasimos Potamianos, Eric Cosatto, Hans Peter Graf, David B. Roe. 65-68 [doi]
- Word-dependent acoustic-labial weights in HMM-based speech recognition. Pierre Jourlin. 69-72 [doi]
- Audio-visual speech perception without traditional speech cues: a second report. Robert E. Remez, Jennifer M. Fellowes, David B. Pisoni, Winston D. Goh, Philip Rubin. 73-76 [doi]
- Impairment of visual speech integration in prosopagnosia. Béatrice de Gelder, Nancy Etcoff, Jean Vroomen. 77-80 [doi]
- Audiovisual intelligibility of an androgynous speaker. C. Schwippert, Christian Benoît. 81-84 [doi]
- Audiovisual speech perception in dyslexics: impaired unimodal perception but no audiovisual integration deficit. Ruth Campbell, A. Whittingham, U. Frith, Dominic W. Massaro, Michael M. Cohen. 85-88 [doi]
- Elucidating the complex relationships between phonetic perception and word recognition in audiovisual speech perception. Lynne E. Bernstein, Paul Iverson, Edward T. Auer Jr. 89-92 [doi]
- The Japanese McGurk effect: the role of linguistic and cultural factors in auditory-visual speech perception. Denis Burnham, Sheila Keane. 93-96 [doi]
- Auditory-visual interaction in voice localization and in bimodal speech recognition: the effects of desynchronization. Paul Bertelson, Jean Vroomen, Béatrice de Gelder. 97-100 [doi]
- Audiovisual fusion in Finnish syllables and words. Mikko Sams, Veikko Surakka, Pia Helin, Riitta Kättö. 101-104 [doi]
- Analytical method for linguistic information of facial gestures in natural dialogue languages. Akira Ichikawa, Yoichiro Okada, Atsushi Imiya, K. Horiuchi. 105-108 [doi]
- An approach to face localization based on signature analysis. Bogdan Raducanu, Manuel Graña. 109-112 [doi]
- Preprocessing of visual speech under real world conditions. Uwe Meier, Rainer Stiefelhagen, Jie Yang. 113-116 [doi]
- A hybrid approach to orientation-free lip tracking. Lionel Revéret, Frederique Garcia, Christian Benoît, Eric Vatikiotis-Bateson. 117-120 [doi]
- Recovering 3D lip structure from 2D observations using a model trained from video. Sumit Basu, Alex Pentland. 121-124 [doi]
- Interpreted multi-state lip models for audio-visual speech recognition. Michael Vogt. 125-128 [doi]
- Intelligibility of speech mediated by low frame-rate video. Anne H. Anderson, Art Blokland. 129-132 [doi]
- Lip synchronization of speech. David F. McAllister, Robert D. Rodman, Donald L. Bitzer, Andrew S. Freeman. 133-136 [doi]
- Speech to lip movement synthesis by HMM. Eli Yamamoto, Satoshi Nakamura, Kiyohiro Shikano. 137-140 [doi]
- Videorealistic talking faces: a morphing approach. Tony Ezzat, Tomaso Poggio. 141-144 [doi]
- A French-speaking synthetic head. Bertrand Le Goff, Christian Benoît. 145-148 [doi]
- Animation of talking agents. Jonas Beskow. 149-152 [doi]
- Video rewrite: visual speech synthesis from video. Christoph Bregler, Michele Covell, Malcolm Slaney. 153-156 [doi]