- Natural interfaces in the field: the case of pen and paper. Philip R. Cohen. 1-2 [doi]
- Manipulating trigonometric expressions encoded through electro-tactile signals. Tatiana Evreinova. 3-8 [doi]
- Multimodal system evaluation using modality efficiency and synergy metrics. Manolis Perakakis, Alexandros Potamianos. 9-16 [doi]
- Effectiveness and usability of an online help agent embodied as a talking head. Jérôme Simonin, Noëlle Carbonell, Danielle Pelé. 17-20 [doi]
- Interaction techniques for the analysis of complex data on high-resolution displays. Chreston Miller, Ashley Robinson, Rongrong Wang, Pak Chung Wong, Francis K. H. Quek. 21-28 [doi]
- Role recognition in multiparty recordings using social affiliation networks and discrete distributions. Sarah Favre, Hugues Salamin, John Dines, Alessandro Vinciarelli. 29-36 [doi]
- Audiovisual laughter detection based on temporal features. Stavros Petridis, Maja Pantic. 37-44 [doi]
- Predicting two facets of social verticality in meetings from five-minute time slices and nonverbal cues. Dinesh Babu Jayagopi, Sileye O. Ba, Jean-Marc Odobez, Daniel Gatica-Perez. 45-52 [doi]
- Multimodal recognition of personality traits in social interactions. Fabio Pianesi, Nadia Mana, Alessandro Cappelletti, Bruno Lepri, Massimo Zancanaro. 53-60 [doi]
- Social signals, their function, and automatic analysis: a survey. Alessandro Vinciarelli, Maja Pantic, Hervé Bourlard, Alex Pentland. 61-68 [doi]
- VoiceLabel: using speech to label mobile sensor data. Susumu Harada, Jonathan Lester, Kayur Patel, T. Scott Saponas, James Fogarty, James A. Landay, Jacob O. Wobbrock. 69-76 [doi]
- The babbleTunes system: talk to your iPod! Jan Schehl, Alexander Pfalzgraf, Norbert Pfleger, Jochen Steigner. 77-80 [doi]
- Evaluating talking heads for smart home systems. Christine Kühnel, Benjamin Weiss, Ina Wechsung, Sascha Fagel, Sebastian Möller. 81-84 [doi]
- Perception of dynamic audiotactile feedback to gesture input. Teemu Tuomas Ahmaniemi, Vuokko Lantz, Juha Marila. 85-92 [doi]
- An integrative recognition method for speech and gestures. Madoka Miki, Chiyomi Miyajima, Takanori Nishino, Norihide Kitaoka, Kazuya Takeda. 93-96 [doi]
- As go the feet...: on the estimation of attentional focus from stance. Francis K. H. Quek, Roger W. Ehrich, Thurmon Lockhart. 97-104 [doi]
- Knowledge and data flow architecture for reference processing in multimodal dialog systems. Ali Choumane, Jacques Siroux. 105-108 [doi]
- The CAVA corpus: synchronised stereoscopic and binaural datasets with head movements. Elise Arnaud, Heidi Christensen, Yan-Chen Lu, Jon Barker, Vasil Khalidov, Miles E. Hansard, Bertrand Holveck, Hervé Mathieu, Ramya Narasimha, Elise Taillant, Florence Forbes, Radu Horaud. 109-116 [doi]
- Towards a minimalist multimodal dialogue framework using recursive MVC pattern. Li Li, Wu Chou. 117-120 [doi]
- Explorative studies on multimodal interaction in a PDA- and desktop-based scenario. Andreas Ratzka. 121-128 [doi]
- Designing context-aware multimodal virtual environments. Lode Vanacken, Joan De Boeck, Chris Raymaekers, Karin Coninx. 129-136 [doi]
- A high-performance dual-wizard infrastructure for designing speech, pen, and multimodal interfaces. Philip R. Cohen, Colin Swindells, Sharon L. Oviatt, Alexander M. Arthur. 137-140 [doi]
- The WAMI toolkit for developing, deploying, and evaluating web-accessible multimodal interfaces. Alexander Gruenstein, Ian McGraw, Ibrahim Badr. 141-148 [doi]
- A three-dimensional characterization space of software components for rapidly developing multimodal interfaces. Marcos Serrano, David Juras, Laurence Nigay. 149-156 [doi]
- Crossmodal congruence: the look, feel and sound of touchscreen widgets. Eve E. Hoggan, Topi Kaaresoja, Pauli Laitinen, Stephen A. Brewster. 157-164 [doi]
- MultiML: a general purpose representation language for multimodal human utterances. Manuel Giuliani, Alois Knoll. 165-172 [doi]
- Deducing the visual focus of attention from head pose estimation in dynamic multi-view meeting scenarios. Michael Voit, Rainer Stiefelhagen. 173-180 [doi]
- Context-based recognition during human interactions: automatic feature selection and encoding dictionary. Louis-Philippe Morency, Iwan de Kok, Jonathan Gratch. 181-188 [doi]
- AcceleSpell, a gestural interactive game to learn and practice finger spelling. José Luis Hernandez-Rebollar, Ethar Ibrahim Elsakay, José D. Alanís-Urquieta. 189-190 [doi]
- A multi-modal spoken dialog system for interactive TV. Rajesh Balchandran, Mark E. Epstein, Gerasimos Potamianos, Ladislav Serédi. 191-192 [doi]
- Multimodal slideshow: demonstration of the OpenInterface interaction development environment. David Juras, Laurence Nigay, Michael Ortega, Marcos Serrano. 193-194 [doi]
- A browser-based multimodal interaction system. Kouichi Katsurada, Teruki Kirihata, Masashi Kudo, Junki Takada, Tsuneo Nitta. 195-196 [doi]
- iGlasses: an automatic wearable speech supplement in face-to-face communication and classroom situations. Dominic W. Massaro, Miguel Á. Carreira-Perpiñán, David J. Merrill, Cass Sterling, Stephanie Bigler, Elise Piazza, Marcus Perlman. 197-198 [doi]
- Innovative interfaces in MonAMI: the reminder. Jonas Beskow, Jens Edlund, Teodore Gjermani, Björn Granström, Joakim Gustafson, Oskar Jonsson, Gabriel Skantze, Helena Tobiasson. 199-200 [doi]
- PHANTOM prototype: exploring the potential for learning with multimodal features in dentistry. Jonathan Padilla San Diego, Alastair Barrow, Margaret Cox, William Harwin. 201-202 [doi]
- Audiovisual 3d rendering as a tool for multimodal interfaces. George Drettakis. 203-204 [doi]
- Multimodal presentation and browsing of music. David Damm, Christian Fremerey, Frank Kurth, Meinard Müller, Michael Clausen. 205-208 [doi]
- An audio-haptic interface based on auditory depth cues. Delphine Devallez, Federico Fontana, Davide Rocchesso. 209-216 [doi]
- Detection and localization of 3d audio-visual objects using unsupervised clustering. Vasil Khalidov, Florence Forbes, Miles E. Hansard, Elise Arnaud, Radu Horaud. 217-224 [doi]
- Robust gesture processing for multimodal interaction. Srinivas Bangalore, Michael Johnston. 225-232 [doi]
- Investigating automatic dominance estimation in groups from visual attention and speaking activity. Hayley Hung, Dinesh Babu Jayagopi, Sileye O. Ba, Jean-Marc Odobez, Daniel Gatica-Perez. 233-236 [doi]
- Dynamic modality weighting for multi-stream HMMs in audio-visual speech recognition. Mihai Gurban, Jean-Philippe Thiran, Thomas Drugman, Thierry Dutoit. 237-240 [doi]
- A Fitts Law comparison of eye tracking and manual input in the selection of visual targets. Roel Vertegaal. 241-248 [doi]
- A Wizard of Oz study for an AR multimodal interface. Minkyung Lee, Mark Billinghurst. 249-256 [doi]
- A realtime multimodal system for analyzing group meetings by combining face pose tracking and speaker diarization. Kazuhiro Otsuka, Shoko Araki, Kentaro Ishizuka, Masakiyo Fujimoto, Martin Heinrich, Junji Yamato. 257-264 [doi]
- Designing and evaluating multimodal interaction for mobile contexts. Saija Lemmelä, Akos Vetek, Kaj Mäkelä, Dari Trendafilov. 265-272 [doi]
- Automated sip detection in naturally-evoked video. Rana El Kaliouby, Mina Mikhail. 273-280 [doi]
- Perception of low-amplitude haptic stimuli when biking. Toni Pakkanen, Jani Lylykangas, Jukka Raisamo, Roope Raisamo, Katri Salminen, Jussi Rantala, Veikko Surakka. 281-284 [doi]
- TactiMote: a tactile remote control for navigating in long lists. Muhammad Tahir, Gilles Bailly, Eric Lecolinet, Gérard Mouret. 285-288 [doi]
- The DIRAC AWEAR audio-visual platform for detection of unexpected and incongruent events. Jörn Anemüller, Jörg-Hendrik Bach, Barbara Caputo, Michal Havlena, Jie Luo, Hendrik Kayser, Bastian Leibe, Petr Motlícek, Tomás Pajdla, Misha Pavel, Akihiko Torii, Luc J. Van Gool, Alon Zweig, Hynek Hermansky. 289-292 [doi]
- Smoothing human-robot speech interactions by using a blinking-light as subtle expression. Kotaro Funakoshi, Kazuki Kobayashi, Mikio Nakano, Seiji Yamada, Yasuhiko Kitamura, Hiroshi Tsujino. 293-296 [doi]
- Feel-good touch: finding the most pleasant tactile feedback for a mobile touch screen button. Emilia Koskinen, Topi Kaaresoja, Pauli Laitinen. 297-304 [doi]
- Embodied conversational agents for voice-biometric interfaces. Álvaro Hernández Trapote, Beatriz López-Mencía, David Díaz Pardo de Vera, Rubén Fernández Pozo, Javier Caminero. 305-312 [doi]