Human-Centered Artificial Intelligence Lab (Bielefeld University)

Research Services

Bielefeld, North Rhine-Westphalia · 154 followers

Research Group focused on Multimodal Behavior Processing for Human-Centered AI

About

The research group Multimodal Behavior Processing, headed by Jun.-Prof. Dr. Hanna Drimalla, is dedicated to the automatic analysis of social interaction signals (e.g., facial expressions, gaze behavior, and voice) using machine learning as well as speech and image processing. Our research focuses on three aspects: the detection of positive and negative affect, the measurement of stress, and the analysis of social interaction patterns. All three are multimodal and time-dependent phenomena. To address this complexity, we collect innovative training data and develop novel analysis methods.

Industry
Research Services
Company size
11–50 employees
Headquarters
Bielefeld, North Rhine-Westphalia
Type
Educational institution

Locations

Employees of Human-Centered Artificial Intelligence Lab (Bielefeld University)

Updates

  • Human-Centered Artificial Intelligence Lab (Bielefeld University) reposted this

    Olya Hakobyan

    Postdoctoral researcher at Bielefeld University

    📑 🚀 Our new paper is out in Behavior Research Methods! What if your chat history could help research (without sharing a single word)? Online communication generates a wealth of data that could offer valuable insights into human behavior. But this data is mostly out of reach for research, and for good reason: privacy and ethics matter. That's where Dona comes in: our data donation platform, developed together with Paul-Julius Hillmann, Florian Martin, Erwin Böttinger and Hanna Drimalla.
    📌 With Dona, people can donate their de-identified chat data from WhatsApp, Facebook and Instagram! De-identified means only anonymized IDs, message lengths and timestamps. No content, no private details.
    📌 But is such minimal data still meaningful for scientific research? Our paper shows how even metadata can help identify interaction patterns and temporal dynamics. And the value is not limited to our advanced analyses!
    📌 The participants received personalized visualizations of their messaging behavior and found them insightful. Most spotted major life events just by looking at simple graphs 📈 📊
    📌 Dona is suited for sensitive research, such as understanding communication patterns in mental health research, while fully protecting participant privacy. Participants benefit too by receiving personalized visualizations of their messaging habits.
    🌍 Everything is open access: the tool, the dataset (for researchers), the analysis code and the paper. Check it out: https://lnkd.in/e942Wk9V
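
    For readers curious what "only anonymized IDs, message lengths and timestamps" looks like in practice, here is a minimal Python sketch of such de-identified chat metadata and two simple interaction-pattern features derived from it. The field names and values are hypothetical illustrations, not the actual Dona schema or analysis code.

    # A minimal sketch (not the actual Dona schema) of de-identified chat
    # metadata and how interaction patterns might be derived from it.
    from collections import Counter
    from datetime import datetime

    donated_messages = [
        # anonymized sender ID, ISO timestamp, message length in characters
        {"sender_id": "a3f9", "timestamp": "2024-03-01T08:15:00", "length": 42},
        {"sender_id": "donor", "timestamp": "2024-03-01T08:17:30", "length": 120},
        {"sender_id": "a3f9", "timestamp": "2024-03-02T21:05:10", "length": 7},
    ]

    # Messages per calendar day: a simple temporal dynamic that needs no content.
    per_day = Counter(
        datetime.fromisoformat(m["timestamp"]).date() for m in donated_messages
    )

    # Share of messages written by the donor: a basic interaction-pattern feature.
    donor_share = sum(m["sender_id"] == "donor" for m in donated_messages) / len(
        donated_messages
    )

    print(per_day, donor_share)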

  • 🚀 Join our Human-Centered AI Lab at Universität Bielefeld as a PhD candidate! If you have a background in computer science, data science, or a related field and are passionate about developing AI to analyze human interaction, this is for you. Engage in meaningful research at the intersection of psychology and computer science. 👉 More details & application: https://lnkd.in/eRNCDTAq Know someone who might be interested? Feel free to share or reach out! 🤝

  • 🌍 From Bielefeld to Bangalore: Highlights from Bhargav's research stay! As part of his research stay at the Bangalore Institute of Technology (BIT) in India, Bhargav Acharya recently conducted a data collection study 📊 to improve non-invasive methods for his ongoing research on heart rate estimation ❤️. During his time at BIT, Bhargav also gave a talk to students, sharing insights into remote photoplethysmography (rPPG) and the broader implications of this technology. Thanks to the Bangalore Institute of Technology, especially Dr. Aswath M U and Dr. Kalpana A B, for their support in facilitating this study. A special thanks to Naveen Kumar for his valuable assistance throughout the project.
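
    For context on the method mentioned above: rPPG recovers the pulse from subtle color changes in facial video. The sketch below shows the classic green-channel baseline, assuming a per-frame mean green intensity of a face region is already available; it is an illustrative simplification, not Bhargav's actual pipeline.

    # A minimal sketch of the classic green-channel rPPG baseline: band-pass
    # the per-frame mean green intensity of the face region and read the
    # heart rate off the spectral peak. Not the lab's actual pipeline.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def estimate_heart_rate(green_trace: np.ndarray, fps: float) -> float:
        """green_trace: per-frame mean green intensity of the face ROI."""
        # Remove slow illumination drift and keep the 0.7-4 Hz band (42-240 bpm).
        b, a = butter(3, [0.7, 4.0], btype="bandpass", fs=fps)
        pulse = filtfilt(b, a, green_trace - green_trace.mean())

        # Dominant frequency of the filtered signal, converted to beats per minute.
        spectrum = np.abs(np.fft.rfft(pulse))
        freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fps)
        band = (freqs >= 0.7) & (freqs <= 4.0)
        return 60.0 * freqs[band][np.argmax(spectrum[band])]

    # Example: a noisy 72 bpm (1.2 Hz) pulse sampled at 30 fps for 10 seconds.
    t = np.arange(0, 10, 1 / 30)
    trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.2, t.size)
    print(estimate_heart_rate(trace, fps=30))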

  • ✨ Join us for an insightful talk by Prof. Dr. Claudia Müller-Birn (@Freie Universität Berlin) on a human-centered approach to AI system design in healthcare! What to Expect? Prof. Dr. Müller-Birn will share insights into designing AI systems that prioritize human values and needs. Key topics include:
    🤝 Human-AI collaboration and its impact on decision-making.
    📊 Value-centered dataset creation, focusing on ethical considerations.
    🤔 Participatory AI development, where patients, clinicians, and researchers co-design systems.
    Hosted by: Human-Centered AI Lab of Prof. Dr. Hanna Drimalla together with the SAIL Research Network.

  • In 2024, we sharpened our focus on human-centered AI that works for people 👨‍💻👩‍💻. We presented our research at conferences and events in Glasgow, Manchester, New York, Yerevan, and Stockholm, connecting with researchers worldwide 🌍. A highlight was Hanna Drimalla's appointment as Professor of Human-Centered Artificial Intelligence (HCAI) 🎓, which is now also reflected in our new group name: HCAI Lab. During our team retreat, we reflected on AI that's transparent, fair, and truly useful for people from all walks of life 🧐. Get ready to dive into fresh scientific adventures with us in 2025 🚀!

  • Human-Centered Artificial Intelligence Lab (Bielefeld University) reposted this

    Project A06 researchers David Johnson, Jonas Paletschek, and Hanna Drimalla have collaborated with Olya Hakobyan to publish a new paper: "Explainable AI for Audio and Visual Affective Computing: A Scoping Review." 📄 The paper reviews how Explainable AI (#XAI) is being applied to audiovisual affective computing, which uses data such as facial expressions and vocal cues to recognize emotional states. The authors provide an overview of #XAI concepts relevant to affective computing, helping to shed light on how machine learning models in this domain make decisions. While interest in the interpretability of machine learning models is growing, there is still a gap in the research addressing this topic. The review highlights encouraging developments in using XAI for audiovisual affective computing, though the application of methods remains limited. The paper concludes with recommendations for enhancing interpretability in future affective machine learning research. ➡️ You can find a short interview with Dr. David Johnson and further information here: https://lnkd.in/eqtQRjK8
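
    To illustrate the kind of method such a review surveys, here is a minimal, hypothetical sketch of gradient-based saliency applied to an emotion classifier. The toy model and the random "face" input are placeholders for illustration only and are not taken from the paper.

    # Gradient-based saliency for a toy emotion classifier: how strongly does
    # each input pixel influence the predicted emotion score?
    import torch
    import torch.nn as nn

    emotion_model = nn.Sequential(          # toy classifier over 7 emotion classes
        nn.Conv2d(1, 8, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(8, 7),
    )

    face = torch.randn(1, 1, 48, 48, requires_grad=True)  # stand-in face crop
    logits = emotion_model(face)
    predicted = logits.argmax(dim=1)

    # Saliency map: gradient of the predicted class score w.r.t. the input pixels.
    logits[0, predicted].backward()
    saliency = face.grad.abs().squeeze()     # 48x48 map of pixel importance
    print(predicted.item(), saliency.shape)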

  • Our Kids-SIT at the MUM Conference! 🎉 Last week at the International Conference on Mobile and Ubiquitous Multimedia (MUM) in Stockholm, our team member William S. presented our demo paper 📝: "Introducing the 'Simulated Interaction Task for Children' (Kids-SIT)". The Kids-SIT is a video-based tool to analyze social behavior in children by studying eye gaze 👀, facial expressions 😊 and head movements 🧑‍💻. ✨ This tool helps standardize how we understand these interactions – valuable for both research and practice! 💡 It was great to share our work, connect with others 🤝, and get feedback! 🙌 Curious to learn more? Check out the paper here 👉 https://lnkd.in/ez4XqQYy

  • Lamarr-Institut

    3,630 followers

    ⚕️👩‍⚕️ #AIColloquium on Human-Centered AI for Mental Health
    How can AI revolutionize mental health care while addressing real-world challenges like usability and trustworthiness? Join us for an exciting talk by Prof. Dr. Hanna Drimalla (Universität Bielefeld), hosted by Lamarr and the Research Center Trustworthy Data Science and Security (RC Trust), to explore this vital question.
    📅 Date: December 12, 2024
    ⏰ Time: 10:15 AM – 11:45 AM
    📍 Location: JvF25/3-303 - Conference Room (Lamarr/RC Trust Dortmund)
    Why attend?
    ✅ Gain insights into cutting-edge AI applications for stress detection and psychiatric diagnostics.
    ✅ Explore real-world solutions integrating computer vision, behavioral signal processing, and multimodal analysis.
    ✅ Learn how explainable AI builds trust in clinical environments.
    ✅ Engage directly with Prof. Drimalla, a leading expert in human-centered AI and mental health.
    What makes this talk unique? Prof. Drimalla's human-centered approach focuses on creating practical, trustworthy AI solutions for mental health care. Her interdisciplinary expertise bridges psychology, AI, and social interaction analysis, offering attendees an inside look at the future of clinical AI.
    🎟 Don't miss this opportunity to explore how AI is transforming mental health care. Mark your calendars and be part of this conversation! More ℹ️ https://lnkd.in/d7M_h43V

  • Human-Centered Artificial Intelligence Lab (Bielefeld University) reposted this

    Olya Hakobyan

    Postdoctoral researcher at Bielefeld University

    🎉 📑 🚀 Exciting news: our review paper has just been accepted by the IEEE Transactions on Affective Computing! But first, let me take you through a playful experiment ⤵️ After a bit of convincing, ChatGPT analyzed the facial expressions in my picture, using descriptions of my eyes, eyebrows, etc. When I asked it to highlight these features in the original image, it generated the result below. I have to admit, I find myself disagreeing with ChatGPT's assessment! This playful experiment highlights an important challenge in the field of explainable AI (xAI): understanding how AI models make decisions. While this was just a fun exercise, David Johnson, Jonas Paletschek, Hanna Drimalla and I conducted a more rigorous analysis of how xAI is applied in affective computing, e.g., AI applications in emotion recognition. Our review paper highlights the current state, the promise and the limitations specific to this field: https://lnkd.in/eJsnt9ua #xai #affectivecomputing

  • 🎉 Once again, we had the pleasure of hosting a guest speaker at our Research Colloquium! Yesterday, Dr. Philipp Müller (Deutsches Forschungszentrum für Künstliche Intelligenz, DFKI) joined us for an engaging talk. 📌 In his presentation, “Multi-modal Behaviour Analysis: Bridging the Gap between Computer Science and Psychology,” Dr. Müller explored how bringing together theories and methods from psychology and computer science can be very insightful 🌍. He shared interesting insights on human and machine face detection, engagement detection, and assessing emotion regulation strategies, highlighting the opportunities of uniting 🧠 psychology and 🤖 computer science.

