Samer Al Moubayed
2010 – 2019
- 2018
- [c38]Hans-Jörg Vögel, Christian Süß, Thomas Hubregtsen, Elisabeth André, Björn W. Schuller, Jérôme Härri, Jörg Conradt, Asaf Adi, Alexander Zadorojniy, Jacques M. B. Terken, Jonas Beskow, Ann Morrison, Kynan Eng, Florian Eyben, Samer Al Moubayed, Susanne Muller, Nicholas Cummins, Viviane S. Ghaderi, Ronee Chadowitz, Raphaël Troncy, Benoit Huet, Melek Önen, Adlen Ksentini:
Emotion-Awareness for Intelligent Vehicle Assistants: A Research Agenda. SEFAIAS@ICSE 2018: 11-15
- 2016
- [c37]Hanchuan Li, Peijin Zhang, Samer Al Moubayed, Shwetak N. Patel, Alanson P. Sample:
ID-Match: A Hybrid Computer Vision and RFID System for Recognizing Individuals in Groups. CHI Extended Abstracts 2016: 7
- [c36]Hanchuan Li, Peijin Zhang, Samer Al Moubayed, Shwetak N. Patel, Alanson P. Sample:
ID-Match: A Hybrid Computer Vision and RFID System for Recognizing Individuals in Groups. CHI 2016: 4933-4944
- [c35]Priyanshu Agarwal, Samer Al Moubayed, Alexander Alspach, Joohyung Kim, Elizabeth J. Carter, Jill Fain Lehman, Katsu Yamane:
Imitating human movement with teleoperated robotic head. RO-MAN 2016: 630-637
- 2015
- [c34]Jill Fain Lehman, Samer Al Moubayed:
Mole Madness - A Multi-Child, Fast-Paced, Speech-Controlled Game. AAAI Spring Symposia 2015
- [c33]Samer Al Moubayed, Jill Lehman:
Design and Architecture of a Robot-Child Speech-Controlled Game. HRI (Extended Abstracts) 2015: 79-80
- [c32]Theodora Chaspari, Samer Al Moubayed, Jill Fain Lehman:
Exploring Children's Verbal and Acoustic Synchrony: Towards Promoting Engagement in Speech-Controlled Robot-Companion Games. INTERPERSONAL@ICMI 2015: 21-24
- [c31]Samer Al Moubayed, Jill Lehman:
Toward Better Understanding of Engagement in Multiparty Spoken Interaction with Children. ICMI 2015: 211-218
- [c30]Samer Al Moubayed, Jill Lehman:
Regulating Turn-Taking in Multi-child Spoken Interaction. IVA 2015: 363-374
- 2014
- [j6]Andreas Persson, Samer Al Moubayed, Amy Loutfi:
Fluent Human-Robot Dialogues About Grounded Objects in Home Environments. Cogn. Comput. 6(4): 914-927 (2014)
- [c29]Samer Al Moubayed, Jonas Beskow, Bajibabu Bollepalli, Joakim Gustafson, Ahmed Hussen Abdelaziz, Martin Johansson, Maria Koutsombogera, José David Águas Lopes, Jekaterina Novikova, Catharine Oertel, Gabriel Skantze, Kalin Stefanov, Gül Varol:
Human-robot collaborative tutoring using multiparty multimodal spoken dialogue. HRI 2014: 112-113
- [c28]Samer Al Moubayed, Jonas Beskow, Gabriel Skantze:
Spontaneous spoken dialogues with the furhat human-like robot head. HRI 2014: 326
- [c27]Samer Al Moubayed, Dan Bohus, Anna Esposito, Dirk Heylen, Maria Koutsombogera, Harris Papageorgiou, Gabriel Skantze:
UM3I 2014: International Workshop on Understanding and Modeling Multiparty, Multimodal Interactions. ICMI 2014: 537-538
- [c26]Maria Koutsombogera, Samer Al Moubayed, Bajibabu Bollepalli, Ahmed Hussen Abdelaziz, Martin Johansson, José David Águas Lopes, Jekaterina Novikova, Catharine Oertel, Kalin Stefanov, Gül Varol:
The Tutorbot Corpus ― A Corpus for Studying Tutoring Behaviour in Multiparty Face-to-Face Spoken Dialogue. LREC 2014: 4196-4201
- [e2]Samer Al Moubayed, Dan Bohus, Anna Esposito, Dirk Heylen, Maria Koutsombogera, Harris Papageorgiou, Gabriel Skantze:
Proceedings of the 2014 Workshop on Understanding and Modeling Multiparty, Multimodal Interactions, UM3I@ICMI 2014, Istanbul, Turkey, November 16, 2014. ACM 2014, ISBN 978-1-4503-0652-2
- 2013
- [j5]Nicole Mirnig, Astrid Weiss, Gabriel Skantze, Samer Al Moubayed, Joakim Gustafson, Jonas Beskow, Björn Granström, Manfred Tscheligi:
Face-to-Face with a Robot: What do we actually Talk about? Int. J. Humanoid Robotics 10(1) (2013)
- [j4]Samer Al Moubayed, Gabriel Skantze, Jonas Beskow:
The furhat Back-Projected humanoid Head-Lip Reading, gaze and Multi-Party Interaction. Int. J. Humanoid Robotics 10(1) (2013)
- [c25]Samer Al Moubayed:
Towards rich multimodal behavior in spoken dialogues with embodied agents. CogInfoCom 2013: 817-822
- [c24]Samer Al Moubayed, Jonas Beskow, Bajibabu Bollepalli, Ahmed Hussen Abdelaziz, Martin Johansson, Maria Koutsombogera, José David Águas Lopes, Jekaterina Novikova, Catharine Oertel, Gabriel Skantze, Kalin Stefanov, Gül Varol:
Tutoring Robots - Multiparty Multimodal Social Dialogue with an Embodied Tutor. eNTERFACE 2013: 80-113
- [c23]Samer Al Moubayed, Jonas Beskow, Gabriel Skantze:
The furhat social companion talking head. INTERSPEECH 2013: 747-749
- [c22]Samer Al Moubayed, Jens Edlund, Joakim Gustafson:
Analysis of gaze and speech patterns in three-party quiz game interaction. INTERSPEECH 2013: 1126-1130
- [p1]Jens Edlund, Samer Al Moubayed, Jonas Beskow:
Co-present or Not? Eye Gaze in Intelligent User Interfaces 2013: 185-203
- 2012
- [b1]Samer Al Moubayed:
Bringing the avatar to life: Studies and developments in facial communication for virtual agents and robots. Royal Institute of Technology, Stockholm, Sweden, 2012
- [j3]Samer Al Moubayed, Jens Edlund, Jonas Beskow:
Taming Mona Lisa: Communicating gaze faithfully in 2D and 3D facial projections. ACM Trans. Interact. Intell. Syst. 1(2): 11:1-11:25 (2012)
- [c21]Samer Al Moubayed, Gabriel Skantze:
Perception of gaze direction for situated interaction. GazeIn@ICMI 2012: 3:1-3:6
- [c20]Gabriel Skantze, Samer Al Moubayed:
IrisTK: a statechart-based toolkit for multi-party face-to-face interaction. ICMI 2012: 69-76
- [c19]Samer Al Moubayed, Gabriel Skantze, Jonas Beskow, Kalin Stefanov, Joakim Gustafson:
Multimodal multiparty social interaction with the furhat head. ICMI 2012: 293-294
- [c18]Samer Al Moubayed, Gabriel Skantze, Jonas Beskow:
Lip-Reading: Furhat Audio Visual Intelligibility of a Back Projected Animated Face. IVA 2012: 196-203
- [c17]Mats Blomberg, Gabriel Skantze, Samer Al Moubayed, Joakim Gustafson, Jonas Beskow, Björn Granström:
Children and adults in dialogue with the robot head Furhat - corpus collection and initial analysis. WOCCI 2012: 87-91
- 2011
- [c16]Samer Al Moubayed, Simon Alexandersson, Jonas Beskow, Björn Granström:
A robotic head using projected animated faces. AVSP 2011: 71
- [c15]Samer Al Moubayed, Gabriel Skantze:
Turn-taking control using gaze in multiparty human-computer dialogue: effects of 2d and 3d displays. AVSP 2011: 99-102
- [c14]Jonas Beskow, Simon Alexandersson, Samer Al Moubayed, Jens Edlund, David House:
Kinetic data for large-scale analysis and modeling of face-to-face conversation. AVSP 2011: 107-110
- [c13]Samer Al Moubayed, Jonas Beskow, Gabriel Skantze, Björn Granström:
Furhat: A Back-Projected Human-Like Robot Head for Multiparty Human-Machine Interaction. COST 2102 Training School 2011: 114-130
- [c12]Jens Edlund, Samer Al Moubayed, Jonas Beskow:
The Mona Lisa Gaze Effect as an Objective Metric for Perceived Cospatiality. IVA 2011: 439-440
- [e1]Giampiero Salvi, Jonas Beskow, Olov Engwall, Samer Al Moubayed:
Auditory-Visual Speech Processing, AVSP 2011, Volterra, Italy, September 1-2, 2011. ISCA 2011
- 2010
- [c11]Samer Al Moubayed, Jonas Beskow, Jens Edlund, Björn Granström, David House:
Animated Faces for Robotic Heads: Gaze and Beyond. COST 2102 Conference 2010: 19-35
- [c10]Samer Al Moubayed, Jonas Beskow, Björn Granström, David House:
Audio-Visual Prosody: Perception, Detection, and Synthesis of Prominence. COST 2102 Training School 2010: 55-71
- [c9]Jonas Beskow, Samer Al Moubayed:
Perception of gaze direction in 2D and 3D facial projections. FAA 2010: 24
- [c8]Jonas Beskow, Samer Al Moubayed:
Perception of nonverbal gestures of prominence in visual speech animation. FAA 2010: 25
- [c7]Samer Al Moubayed, Gopal Ananthakrishnan:
Acoustic-to-articulatory inversion based on local regression. INTERSPEECH 2010: 937-940
- [c6]Samer Al Moubayed, Jonas Beskow:
Prominence detection in Swedish using syllable correlates. INTERSPEECH 2010: 1784-1787
2000 – 2009
- 2009
- [j2]Giampiero Salvi, Jonas Beskow, Samer Al Moubayed, Björn Granström:
SynFace - Speech-Driven Facial Animation for Virtual Speech-Reading Support. EURASIP J. Audio Speech Music. Process. 2009 (2009)
- [j1]Samer Al Moubayed, Jonas Beskow, Björn Granström:
Auditory visual prominence. J. Multimodal User Interfaces 3(4): 299-309 (2009)
- [c5]Samer Al Moubayed, Jonas Beskow:
Effects of visual prominence cues on speech intelligibility. AVSP 2009: 43-46
- [c4]Jonas Beskow, Giampiero Salvi, Samer Al Moubayed:
Synface - verbal and non-verbal face animation from audio. AVSP 2009: 169
- [c3]Samer Al Moubayed, Jonas Beskow, Anne-Marie Öster, Giampiero Salvi, Björn Granström, Nic van Son, Ellen Ormel:
Virtual speech reading support for hard of hearing in a domestic multi-media setting. INTERSPEECH 2009: 1443-1446
- 2008
- [c2]Samer Al Moubayed, Michaël De Smet, Hugo Van hamme:
Lip synchronization: from phone lattice to PCA eigen-projections using neural networks. INTERSPEECH 2008: 2016-2019
- [c1]Jonas Beskow, Björn Granström, Peter Nordqvist, Samer Al Moubayed, Giampiero Salvi, Tobias Herzke, Arne Schulz:
Hearing at home - communication support in home environments for hearing impaired persons. INTERSPEECH 2008: 2203-2206