- Language and thought: talking, gesturing (and signing) about space. John B. Haviland. [doi]
- Feedback is... late: measuring multimodal delays in mobile device touchscreen interaction. Topi Kaaresoja, Stephen A. Brewster. [doi]
- Learning and evaluating response prediction models using parallel listener consensus. Iwan de Kok, Derya Ozkan, Dirk Heylen, Louis-Philippe Morency. [doi]
- Real-time adaptive behaviors in multimodal human-avatar interactions. Hui Zhang, Damian Fricker, Thomas G. Smith, Chen Yu. [doi]
- Facilitating multiparty dialog with gaze, gesture, and speech. Dan Bohus, Eric Horvitz. [doi]
- Focusing computational visual attention in multi-modal human-robot interaction. Boris Schauerte, Gernot A. Fink. [doi]
- Employing social gaze and speaking activity for automatic determination of the Extraversion trait. Bruno Lepri, Subramanian Ramanathan, Kyriaki Kalimeri, Jacopo Staiano, Fabio Pianesi, Nicu Sebe. [doi]
- Gaze quality assisted automatic recognition of social contexts in collaborative Tetris. Weifeng Li, Marc-Antoine Nüssli, Patrick Jermann. [doi]
- Discovering eye gaze behavior during human-agent conversation in an interactive storytelling application. Nikolaus Bee, Johannes Wagner, Elisabeth André, Thurid Vogt, Fred Charles, David Pizzi, Marc Cavazza. [doi]
- Speak4it: multimodal interaction for local search. Patrick Ehlen, Michael Johnston. [doi]
- A multimodal interactive text generation system. Luis Rodríguez, Ismael García-Varea, Alejandro Revuelta-Martínez, Enrique Vidal. [doi]
- The Ambient Spotlight: personal multimodal search without query. Jonathan Kilgour, Jean Carletta, Steve Renals. [doi]
- Cloud mouse: a new way to interact with the cloud. Chunhui Zhang, Min Wang, Richard Harper. [doi]
- Musical performance as multimodal communication: drummers, musical collaborators, and listeners. Richard Ashley. [doi]
- Toward natural interaction in the real world: real-time gesture recognition. Ying Yin, Randall Davis. [doi]
- Gesture and voice prototyping for early evaluations of social acceptability in multimodal interfaces. Julie Rico, Stephen A. Brewster. [doi]
- Automatic recognition of sign language subwords based on portable accelerometer and EMG sensors. Yun Li, Xiang Chen, Jianxun Tian, Xu Zhang, Kongqiao Wang, Jihai Yang. [doi]
- Enabling multimodal discourse for the blind. Francisco Oliveira, Heidi Cowan, Bing Fang, Francis K. H. Quek. [doi]
- Recommendation from robots in a real-world retail shop. Koji Kamei, Kazuhiko Shinozawa, Tetsushi Ikeda, Akira Utsumi, Takahiro Miyashita, Norihiro Hagita. [doi]
- Dynamic user interface distribution for flexible multimodal interaction. Marco Blumendorf, Dirk Roscher, Sahin Albayrak. [doi]
- 3D-press: haptic illusion of compliance when pressing on a rigid surface. Johan Kildal. [doi]
- Understanding contextual factors in location-aware multimedia messaging. Abdallah El-Ali, Frank Nack, Lynda Hardman. [doi]
- Embedded media barcode links: optimally blended barcode overlay on paper for linking to associated media. Qiong Liu, Chunyuan Liao, Lynn Wilcox, Anthony Dunnigan. [doi]
- Enhancing browsing experience of table and image elements in web pages. Wenchang Xu, Xin Yang, Yuanchun Shi. [doi]
- PhotoMagnets: supporting flexible browsing and searching in photo collections. Ya-Xi Chen, Michael Reiter, Andreas Butz. [doi]
- A language-based approach to indexing heterogeneous multimedia lifelog. Peng-Wen Chen, Snehal Kumar Chennuru, Senaka Buthpitiya, Ying Zhang. [doi]
- Human-centered attention models for video summarization. Kaiming Li, Lei Guo, Carlos Faraco, Dajiang Zhu, Fan Deng, Tuo Zhang, Xi Jiang, Degang Zhang, Hanbo Chen, Xintao Hu, L. Stephen Miller, Tianming Liu. [doi]
- Activity-based Ubicomp: a new research basis for the future of human-computer interaction. James A. Landay. [doi]
- Visual speech synthesis by modelling coarticulation dynamics using a non-parametric switching state-space model. Salil Deena, Shaobo Hou, Aphrodite Galata. [doi]
- Multi-modal computer assisted speech transcription. Luis Rodríguez, Ismael García-Varea, Enrique Vidal. [doi]
- Grounding spatial language for video search. Stefanie Tellex, Thomas Kollar, George Shaw, Nicholas Roy, Deb Roy. [doi]
- Location grounding in multimodal local search. Patrick Ehlen, Michael Johnston. [doi]
- Linearity and synchrony: quantitative metrics for slide-based presentation methodology. Kazutaka Kurihara, Toshio Mochizuki, Hiroki Oura, Mio Tsubakimoto, Toshihisa Nishimori, Jun Nakahara. [doi]
- Empathetic video experience through timely multimodal interaction. Myunghee Lee, Gerard J. Kim. [doi]
- Haptic numbers: three haptic representation models for numbers on a touch screen phone. Toni Pakkanen, Roope Raisamo, Katri Salminen, Veikko Surakka. [doi]
- Key-press gestures recognition and interaction based on SEMG signals. Juan Cheng, Xiang Chen, Zhiyuan Lu, Kongqiao Wang, Minfen Shen. [doi]
- Mood avatar: automatic text-driven head motion synthesis. Kaihui Mu, Jianhua Tao, Jianfeng Che, Minghao Yang. [doi]
- Does haptic feedback change the way we view touchscreens in cars? Matthew J. Pitts, Gary E. Burnett, Mark A. Williams, Tom Wellings. [doi]
- Identifying emergent leadership in small groups using nonverbal communicative cues. Dairazalia Sanchez-Cortes, Oya Aran, Marianne Schmid Mast, Daniel Gatica-Perez. [doi]
- Quantifying group problem solving with stochastic analysis. Wen Dong, Alex Pentland. [doi]
- Cognitive skills learning: pen input patterns in computer-based athlete training. Natalie Ruiz, Qian Qian Feng, Ronnie Taib, Tara Handke, Fang Chen. [doi]
- Vocal sketching: a prototype tool for designing multimodal interaction. Koray Tahiroglu, Teemu Tuomas Ahmaniemi. [doi]
- Evidence-based automated traffic hazard zone mapping using wearable sensors. Masahiro Tada, Haruo Noma, Kazumi Renge. [doi]
- Analysis environment of conversational structure with nonverbal multimodal data. Yasuyuki Sumi, Masaharu Yano, Toyoaki Nishida. [doi]
- Design and evaluation of a wearable remote social touch device. Rongrong Wang, Francis K. H. Quek, James Keng Soon Teh, Adrian David Cheok, Sep Riang Lai. [doi]
- Multimodal interactive machine translation. Vicent Alabau, Daniel Ortiz-Martínez, Alberto Sanchís, Francisco Casacuberta. [doi]
- Component-based high fidelity interactive prototyping of post-WIMP interactions. Jean-Yves Lionel Lawson, Mathieu Coterot, Cyril Carincotte, Benoît Macq. [doi]
- Active learning strategies for handwritten text transcription. Nicolás Serrano, Adrià Giménez, Alberto Sanchís, Alfons Juan. [doi]
- Behavior and preference in minimal personality: a study on embodied conversational agents. Yuting Chen, Adeel Naveed, Robert Porzel. [doi]
- Vlogcast yourself: nonverbal behavior and attention in social media. Joan-Isaac Biel, Daniel Gatica-Perez. [doi]
- 3D user-perspective, voxel-based estimation of visual focus of attention in dynamic meeting scenarios. Michael Voit, Rainer Stiefelhagen. [doi]
- Modelling and analyzing multimodal dyadic interactions using social networks. Sergio Escalera, Petia Radeva, Jordi Vitrià, Xavier Baró, Bogdan Raducanu. [doi]
- Analyzing multimodal time series as dynamical systems. Shohei Hidaka, Chen Yu. [doi]
- Conversation scene analysis based on dynamic Bayesian network and image-based gaze detection. Sebastian Gorga, Kazuhiro Otsuka. [doi]