Can You Trust Your Robot? A Psychologist's Guide to AI Design
In the not-so-distant future, the sight of humanoid robots in our homes, workplaces, and public spaces will be as common as smartphones. The rapid advancements in artificial intelligence (AI) and robotics over the last few years have propelled us towards this reality. Yet, as AI systems have evolved from backend algorithms to conversational agents like GPT, which some of us engage with daily, the next frontier presents a unique challenge: humanoid robots. And this time, the psychology of trust plays a far more intricate role.
Let’s look at my current AI “relationships”. I converse with my ChatGPT app multiple times a day. Do I trust it/her/him? Mostly, yes. How I arrive at that judgment of trust is relatively straightforward. First, the interaction over a chat interface is a format I’m accustomed to, from texting, instant messaging, and searching on Google. The exchange of information feels safe, distant even. And second, I’m evaluating the trustworthiness of ChatGPT based on the content I’m given and how accurately it fulfills my requests.
There’s no real need for emotional processing because, in essence, it’s just text coming from a faceless, neutral entity. But what happens when we shift from screen to physical form? When AI moves into our personal space as a humanoid figure, it’s not just the information we scrutinize; it’s everything about them: their movements, facial expressions, and other nonverbal cues.
Did that bot just give me cut-eye when I asked it to take out the trash? The future is here. And it’s weird.
Humanoid robots like Tesla’s Optimus and 1X’s EVE and NEO represent this new phase. To be honest, some of these new humanoid robots give me the creeps, and I’m not alone in that reaction.
So, this raises the question: Is there a way to prevent us from getting the heebie-jeebies when interacting with these bots? And what can companies do to ensure their AI and bot products are perceived as safe and trustworthy, and therefore more likely to be adopted by the masses?
The uncanny valley and the human brain
There’s a psychological phenomenon known as the “uncanny valley.” When an object, such as a robot, becomes eerily human-like but isn’t quite fully human, our brains go into a state of discomfort. The term was coined by roboticist Masahiro Mori, whose uncanny valley hypothesis suggests that humans feel a sense of unease when faced with objects that look almost human but are just different enough to trigger dissonance. Our brains subconsciously register the mismatch, and the result is that “creepy” feeling many of us experience.
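Mori sketched this relationship as a curve: affinity rises as an object becomes more human-like, plunges sharply just before full realism, then recovers. He never gave it an equation, so the short Python sketch below plots a purely invented shape that captures only the qualitative story; the specific function and numbers are illustrative assumptions, not anything Mori specified.

```python
# Purely illustrative: Mori gave no functional form for the uncanny
# valley, so this curve is an invented shape that merely captures the
# qualitative story (affinity rises, dips sharply near full realism,
# then recovers for a healthy human).
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0, 1, 500)  # 0 = industrial arm, 1 = healthy human
# Linear rise in affinity, minus a narrow Gaussian dip near full realism.
affinity = likeness - 1.6 * np.exp(-((likeness - 0.85) ** 2) / 0.004)

plt.plot(likeness, affinity)
plt.axhline(0, color="gray", linewidth=0.5)
plt.xlabel("Human likeness")
plt.ylabel("Affinity")
plt.title("A hypothetical uncanny-valley curve (illustrative only)")
plt.show()
```

The exact depth and position of the dip are arbitrary here; the point is the shape: almost-human is rated worse than obviously-machine.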
So why does this happen? It’s rooted in our evolutionary psychology. Our brains have evolved to process human-like features as part of our social cognition. We’ve developed a highly tuned ability to interpret subtle facial expressions, body language, and movements in other humans to understand their intentions and trustworthiness. When something mimics those cues imperfectly, as is the case with many humanoid robots, our brain flags it as something “off,” resulting in discomfort or distrust.
A 2012 study by Professor David DeSteno and his colleagues confirmed a series of predictions: the nonverbal cues that signal distrust in humans are read the same way when performed by a cute robot named Nexi. Nexi, a small humanoid robot, was designed to perform combinations of body movements, facial expressions, and gaze directions associated with distrust. The researchers found that even though Nexi doesn’t look fully human, its capacity to simulate human behavior triggered the same distrusting response. Whether performed by a human or a robot, the combination of four gestures, crossed arms, a face touch, a hand touch, and leaning back, led conversation partners to trust the one performing them less.
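For designers of expressive robots, a finding like this translates naturally into a behavior-level constraint. Below is a minimal, purely illustrative Python sketch, assuming a hypothetical gesture-planning layer (the gesture names and functions are my inventions, not part of the study or of any robotics SDK), of how a planner might screen a gesture sequence for the four-cue combination DeSteno’s team identified.

```python
# Hypothetical sketch: screening planned gestures for the four
# distrust cues from DeSteno et al.'s Nexi study. Gesture names and
# this filtering API are illustrative assumptions, not a real SDK.

# The four nonverbal cues that, in combination, signaled distrust.
DISTRUST_CUES = {"arms_crossed", "face_touch", "hand_touch", "lean_back"}

def flags_distrust(planned_gestures: list[str]) -> bool:
    """Return True if the plan contains the full four-cue combination.

    The study's finding concerned the combination of cues, so a single
    face touch on its own does not trip this check.
    """
    return DISTRUST_CUES.issubset(planned_gestures)

def sanitize(planned_gestures: list[str]) -> list[str]:
    """Drop one cue (here, the lean back) to break up the combination."""
    if not flags_distrust(planned_gestures):
        return planned_gestures
    return [g for g in planned_gestures if g != "lean_back"]

if __name__ == "__main__":
    plan = ["wave", "arms_crossed", "face_touch", "hand_touch", "lean_back"]
    print(flags_distrust(plan))  # True: the full combination is present
    print(sanitize(plan))        # lean_back removed; combination broken
```

The code itself is trivial; the design stance it encodes is the point: social-cognition findings can become explicit constraints that a robot’s expressive layer must respect.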
The takeaway here is profound: if humanoid robots are to become a regular part of our lives, they need to be designed with our social cognition in mind. We must be able to read them, trust them, and—at the very least—not feel unsettled in their presence.
The Dilemma: Cute, cartoonish, or hyper-real?
So, what’s the solution for designers? Do we keep these robots on the friendly, cartoonish side of the uncanny valley? Perhaps. Nexi, for instance, manages to be functional and cute; far from a hyper-realistic human, it is, the science might argue, more easily trusted. On the other hand, some companies, like Boston Dynamics, have opted to make their robots distinctly machine-like, avoiding humanoid features altogether. Their robots move like humans but look like machines, which helps them skirt the uncanny valley entirely.
However, as technology continues to improve, the temptation to create more human-like robots will grow. And here’s where the risk lies: the closer a robot resembles a human without crossing that final threshold of realism, the more disconcerting it becomes. Robots that look human but don’t quite act, move, or express themselves as humans trigger distrust and discomfort, often more so than robots that look completely non-human. The uncanny valley is a tricky place, and the stakes are high for companies designing robots that will inhabit our personal spaces.
Designing for trustworthiness
To ensure humanoid robots are welcomed into our lives rather than feared or mistrusted, technology companies must work closely with psychologists and behavioral scientists. Understanding the underlying psychology of trust is critical. Trust isn’t just about functionality—it’s about how well the robot can mimic the social cues we expect from other humans.
Do we give them heads and limbs, or do we make them less human-like to avoid negative perceptions? Do we design robots that are more like Nexi, cartoonish and cute, or do we push the boundaries of hyper-realism and risk plunging into the uncanny valley? Is there any upside to aiming for robots that might eventually be indistinguishable from a human?
Ultimately, the future of humanoid AI hinges on these design decisions. If done right, these robots can become trusted companions and helpers in our homes and workplaces. If done poorly, they might remain objects of suspicion, unsettling us in ways that are difficult to shake. The challenge is not just about advancing technology—it’s about navigating the complex psychology of human-robot interaction.