AI Unplugged: Please and Thank You
It turns out that being kind to #AI causes it to perform better ... just like humans.
A Little Too Nice
Before large language model artificial intelligence came along, voice assistants were dominant in our homes and on our phones. They are unfailingly polite. Given that these assistants are often female-presenting, the capacity for abuse (and the reinforcement of gender biases) has long been a concern. These concerns came to a head recently when Sam Altman at OpenAI attempted to implement a voice for the company's AI, ChatGPT, that sounded suspiciously similar to Scarlett Johansson's in the movie Her ... without her consent.
The concern is that as AI evolves, we are setting a precedent for abusing beings that seem like women but have no rights. Hunter Walk expressed concerns about how a child's personality might be influenced by an assistant that does whatever it's asked:
You see, the prompt command to activate the Echo is “Alexa…” not “Alexa, please.” And Alexa doesn’t require a ‘thank you’ before it’s ready to perform another task. Learning at a young age is often about repetitive norms and cause/effect. Cognitively I’m not sure a kid gets why you can boss Alexa around but not a person. At the very least, it creates patterns and reinforcement that so long as your diction is good, you can get what you want without niceties.
Elementary schools in the U.S. emphasize the importance of kindness, but as Walk correctly points out, there are no consequences to a user for being rude to a voice assistant, other than perhaps it not doing what it's asked. That's about to change with AI.
Be Nice...
It turns out that being polite to an AI causes it to perform better. A recent cross-cultural research paper, in which the researchers tested a half-dozen chatbots against dozens of tasks, discovered that being impolite results in "deterioration in model performance, including generations containing mistakes, stronger biases, and omission of information." The finding held true across English, Chinese, and Japanese prompts.
There are a variety of reasons why this works, and none of them involve the AI's feelings. A user's tone shapes where and how the model draws on its training data, as Nathan Bos, a senior research associate at Johns Hopkins University, explains:
Polite prompts may direct the system to retrieve information from more courteous, and therefore probably more credible, corners of the Internet. A snarky prompt could have the opposite effect, directing the system to arguments on, say, Reddit. “LLMs could pick that up in training without ever even registering the concept of ‘positivity’ or ‘negativity’ and give better or more detailed responses because they’re associated with positive language,” Bos explains.
Microsoft's Kurtis Beavers, director on the design team for Copilot, agrees:
In the same way that your email autocomplete suggests a likely next word or phrase, LLMs pick a sentence or paragraph it thinks you might want based on your input. Put another way, it’s a giant prediction machine making highly probabilistic guesses at what would plausibly come next. So when it clocks politeness, it’s more likely to be polite back.
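The experiment is easy to approximate yourself: ask a model the same question in a polite register and a rude one, then compare the answers. Below is a minimal sketch in Python, assuming the OpenAI Python SDK and an API key in the environment; the model name ("gpt-4o-mini") is chosen purely for illustration and is not from the research above, and the paper's actual rubric (scoring mistakes, bias, and omissions across many tasks and runs) is far more rigorous than this toy comparison.

```python
# Minimal sketch of a politeness A/B test against a chat model.
# Assumptions (not from the article): the `openai` SDK is installed,
# OPENAI_API_KEY is set, and "gpt-4o-mini" is an available model.
from openai import OpenAI

client = OpenAI()

PROMPTS = {
    "polite": (
        "Hello! Could you please summarize the causes of the "
        "2008 financial crisis? Thank you!"
    ),
    "rude": (
        "Summarize the causes of the 2008 financial crisis. "
        "Hurry up, and don't waste my time."
    ),
}

for tone, prompt in PROMPTS.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce sampling noise so tone is the main variable
    )
    answer = response.choices[0].message.content
    # A real study would score accuracy, bias, and omissions; printing
    # the length and a preview is just a stand-in for that rubric.
    print(f"--- {tone} ({len(answer)} chars) ---")
    print(answer[:300], "\n")
```

Run it a few times and compare the detail and completeness of the two answers; anecdotally, the rude variant tends to come back shorter and sloppier, which is consistent with the paper's findings.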
Which brings us back to those voice assistants.
...or Else
Apple integrated ChatGPT with Siri as part of Apple Intelligence in October. Amazon is similarly scrambling to update Alexa with Anthropic's Claude, and Google Home will be updated with Gemini; both of those updates are planned for this year.
Which means that by 2025, our AI voice assistants may no longer tolerate abuse, audibly performing worse (even seeming annoyed) when users are rude to them. And that's probably a good thing, for adults and children alike. As Jenny Radesky, a University of Michigan pediatrician, explains:
It’s up to adults to help children conceptualize virtual assistants in a healthy way, Radesky said, and much of that comes through modeled behavior. Show kindness to virtual assistants in front of kids, she said.
And while we're at it, be nicer to humans too. Your AI assistant will (politely) thank you for it.
Please Note: The views and opinions expressed here are solely my own and do not necessarily represent those of my employer or any other organization.