- How to create a look-a-like avatar pipeline using low-cost equipment. Verónica Costa Orvalho.
- Perceived emotionality of linear and non-linear AUs synthesised using a 3D dynamic morphable facial model. Darren Cosker, Eva Krumhuber, Adrian Hilton.
- Audiovisual binding in speech perception. Jean-Luc Schwartz.
- The perceived sequence of consonants in McGurk combination illusions depends on syllabic stress. Bo Holm-Rasmussen, Tobias Andersen.
- Head movements, eyebrows, and phonological prosodic prominence levels in Stockholm Swedish news broadcasts. Gilbert Ambrazaitis, Malin Svensson Lundmark, David House.
- An artistic and tool-driven approach for believable digital characters. Volker Helzle.
- Visio-articulatory to acoustic conversion of speech. Michael Pucher, Dietmar Schabus.
- Interface for monitoring of engagement from audio-visual cues. João Paulo Cabral, Yuyun Huang, Christy Elias, Ketong Su, Nick Campbell.
- Visual lip information supports auditory word segmentation. Antje Strauß, Christophe Savariaux, Sonia Kandel, Jean-Luc Schwartz.
- Auditory-visual perception of VCVs produced by people with Down syndrome: a preliminary study. Alexandre Hennequin, Amélie Rochet-Capellan, Marion Dohen.
- From text-to-speech (TTS) to talking head - a machine learning approach to A/V speech modeling and rendering. Frank K. Soong, Lijuan Wang.
- Boxing the face: a comparison of dynamic facial databases used in facial analysis and animation. Pasquale Dente, Dennis Küster, Eva Krumhuber.
- Dynamics of audiovisual binding in elderly population. Ganesh Attigodu Chandrashekara, Frédéric Berthommier, Jean-Luc Schwartz.
- An answer to a naïve question to the McGurk effect: why does audio b give more d percepts with visual g than with visual d? Tobias Andersen.
- Visual cues to phrase segmentation and the acquisition of word order. Irene De la Cruz-Pavía, Michael McAuliffe, Janet F. Werker, Judit Gervain, Eric Vatikiotis-Bateson.
- Environmental, linguistic, and developmental influences on mothers' speech to children: an examination of audible and visible properties. Nicholas Smith, Timothy Vallier, Bob McMurray, Christine Hammans, Julia Garrick.
- Children's spontaneous emotional expressions while receiving (un)wanted prizes in the presence of peers. Mandy Visser, Emiel Krahmer, Marc Swerts. 1-6.
- You can raise your eyebrows, I don't mind: are monolingual and bilingual infants equally good at learning from the eyes region of a talking face? Mathilde Fort, Anira Escrichs, Alba Ayneto-Gimeno, Núria Sebastián-Gallés. 7-11.
- Comparison of visual speech perception of sampled-based talking heads: adults and children with and without developmental dyslexia. Paula D. Paro Costa, Daniella Batista, Mayara Toffoli, Keila A. Baraldi Knobel, Cíntia Alves Salgado, José Mario De Martino. 12-16.
- Cross-modality matching of linguistic prosody in older and younger adults. Simone Simonetti, Jeesun Kim, Chris Davis. 17-21.
- "I do not see what you are saying": reduced visual influence on multimodal speech integration in children with SLI. Aurélie Huyse, Frédéric Berthommier, Jacqueline Leybaert. 22-27.
- Message vs. messenger effects on cross-modal matching for spoken phrases. Catherine T. Best, Christian Kroos, Karen E. Mulak, Shaun Halovic, Mathilde Fort, Christine Kitamura. 28-33.
- Audiovisual generation of social attitudes from neutral stimuli. Adela Barbulescu, Gérard Bailly, Rémi Ronfard, Maël Pouget. 34-39.
- Delayed auditory feedback with static and dynamic visual feedback. Elizabeth Stelle, Caroline L. Smith, Eric Vatikiotis-Bateson. 40-45.
- Visual vs. auditory emotion information: how language and culture affect our bias towards the different modalities. Chee Seng Chong, Jeesun Kim, Chris Davis. 46-51.
- Anticipation of turn-switching in auditory-visual dialogs. Hansjörg Mixdorff, Angelika Hönemann, Jeesun Kim, Chris Davis. 52-56.
- Comparison of multisensory display rules in expressing complex emotions between cultures. Sachiko Takagi, Shiho Miyazawa, Elisabeth Huis In 't Veld, Béatrice de Gelder, Akihiro Tanaka. 57-62.
- Towards the development of facial and vocal expression database in East Asian and Western cultures. Akihiro Tanaka, Sachiko Takagi, Saori Hiramatsu, Elisabeth Huis In 't Veld, Béatrice de Gelder. 63-66.
- The effect of modality and speaking style on the discrimination of non-native phonological and phonetic contrasts in noise. Sarah Fenwick, Chris Davis, Catherine T. Best, Michael D. Tyler. 67-72.
- Audio-visual perception of Mandarin lexical tones in AX same-different judgment task. Rui Wang, Biao Zeng, Simon Thompson. 73-77.
- Lip animation synthesis: a unified framework for speaking and laughing virtual agent. Yu Ding, Catherine Pelachaud. 78-83.
- Comparison of dialect models and phone mappings in HSMM-based visual dialect speech synthesis. Dietmar Schabus, Michael Pucher. 84-87.
- HMM-based visual speech synthesis using dynamic visemes. Ausdang Thangthai, Barry-John Theobald. 88-92.
- Investigating the impact of artificial enhancement of lip visibility on the intelligibility of spectrally-distorted speech. Najwa Alghamdi, Steve Maddock, Guy J. Brown, Jon Barker. 93-98.
- The stability of mouth movements for multiple talkers over multiple sessions. Chris Davis, Jeesun Kim, Vincent Aubanel, Gregory Zelic, Yatin Mahajan. 99-102.
- Voicing classification of visual speech using convolutional neural networks. Thomas Le Cornu, Ben Milner. 103-108.
- Comparison of single-model and multiple-model prediction-based audiovisual fusion. Stavros Petridis, Varun Rajgarhia, Maja Pantic. 109-114.
- Finding phonemes: improving machine lip-reading. Helen L. Bear, Richard Harvey, Yuxuan Lan. 115-120.
- Discovering patterns in visual speech. Stephen Cox. 121-126.
- Improving lip-reading performance for robust audiovisual speech recognition using DNNs. Kwanchiva Thangthai, Richard Harvey, Stephen J. Cox, Barry-John Theobald. 127-131.
- Explaining the visual and masked-visual advantage in speech perception in noise: the role of visual phonetic cues. Vincent Aubanel, Chris Davis, Jeesun Kim. 132-136.
- Analysing the importance of different visual feature coefficients. Danny Websdale, Ben Milner. 137-142.
- Auditory and audiovisual close-shadowing in normal and cochlear-implanted hearing impaired subjects. Lucie Scarbel, Denis Beautemps, Jean-Luc Schwartz, Marc Sato. 143-146.
- The multi-modal nature of trustworthiness perception. Elena Tsankova, Eva Krumhuber, Andrew J. Aubrey, Arvid Kappas, Guido Möllering, A. David Marshall, Paul L. Rosin. 147-152.
- Combining acoustic and visual features to detect laughter in adults' speech. Hrishikesh Rao, Zhefan Ye, Yin Li, Mark A. Clements, Agata Rozga, James M. Rehg. 153-156.
- 4D Cardiff Conversation Database (4D CCDb): a 4D database of natural, dyadic conversations. Jason Vandeventer, Andrew J. Aubrey, Paul L. Rosin, A. David Marshall. 157-162.
- Integration of auditory, labial and manual signals in cued speech perception by deaf adults: an adaptation of the McGurk paradigm. Clémence Bayard, Cécile Colin, Jacqueline Leybaert. 163-168.
- Improved visual speech synthesis using dynamic viseme k-means clustering and decision trees. Christiaan Rademan, Thomas Niesler. 169-174.
- Scattering vs. discrete cosine transform features in visual speech processing. Etienne Marcheret, Gerasimos Potamianos, Josef Vopicka, Vaibhava Goel. 175-180.
- Stream weight estimation using higher order statistics in multi-modal speech recognition. Kazuto Ukai, Satoshi Tamura, Satoru Hayamizu. 181-184.
- Optimal timing of audio-visual text presentation: the role of attention. Maiko Takahashi, Akihiro Tanaka. 185-189.
- Speaker-independent machine lip-reading with speaker-dependent viseme classifiers. Helen L. Bear, Stephen J. Cox, Richard W. Harvey. 190-195.
- Face-speech sensor fusion for non-invasive stress detection. Vasudev Bethamcherla, Will Paul, Cecilia Ovesdotter Alm, Reynold J. Bailey, Joe Geigel, Linwei Wang. 196-201.
- Classification of auditory-visual attitudes in German. Angelika Hönemann, Hansjörg Mixdorff, Albert Rilliard. 202-207.
- The development of patterns of gaze to a speaking face. Julia Irwin, Lawrence Brancazio. 208-212.