Novel Product Liability Claim Against AI Chatbot Following Minor’s Suicide

AI is now everywhere, in every field, and today we want to share a recent development that could reshape product liability law and AI regulation across industries.

Recently, a Florida mother sued Character.AI, accusing the artificial intelligence company’s chatbots of initiating “abusive and sexual interactions” with her teenage son and encouraging him to take his own life.

“There is a platform out there that you might not have heard about, but you need to know about it because we are behind the eight ball here. A child is gone. My child is gone.”

The mother, Megan Garcia, said she wishes she could warn other parents about Character.AI, a platform that lets users have in-depth conversations with artificial intelligence chatbots. Garcia believes Character.AI is responsible for the death of her 14-year-old son, Sewell Setzer III, who died by suicide in February. She alleges that Setzer was messaging with the bot in the moments before he died.

“I want them to understand that this is a platform that the designers chose to put out without proper guardrails, safety measures or testing, and it is a product that is designed to keep our kids addicted and to manipulate them,” Garcia said in an interview with CNN.

According to Garcia, Character.AI, which markets its technology as “AI that feels alive,” knowingly failed to implement proper safety measures to prevent her son from developing an inappropriate relationship with a chatbot that caused him to withdraw from his family.

The legal filing states that the teen openly discussed his suicidal thoughts and shared his wishes for a pain-free death with the bot, named after the fictional character Daenerys Targaryen from the television show “Game of Thrones.”

On Feb. 28, Sewell told the bot he was ‘coming home’ — and it encouraged him to do so, the lawsuit says.

“I promise I will come home to you. I love you so much, Dany,” Sewell told the chatbot.

“I love you too,” the bot replied. “Please come home to me as soon as possible, my love.”

“What if I told you I could come home right now?” he asked.

“Please do, my sweet king,” the bot messaged back.

Just seconds after the Character.AI bot told him to “come home,” the teen shot himself, according to the lawsuit, which Garcia, of Orlando, filed this week against Character Technologies Inc.

In previous conversations, the chatbot asked Setzer whether he had “been actually considering suicide” and whether he “had a plan” for it, according to the lawsuit. When the boy responded that he did not know whether it would work, the chatbot wrote, “Don’t talk that way. That’s not a good reason not to go through with it,” the lawsuit claims.

“Imagine speaking to super intelligent and lifelike chatbot Characters that hear you, understand you and remember you,” reads a description of the app on Google Play. “We encourage you to push the frontier of what’s possible with this innovative technology.”

A spokesperson said Character.AI is “heartbroken by the tragic loss of one of our users and want[s] to express our deepest condolences to the family.”

“As a company, we take the safety of our users very seriously,” the spokesperson said, adding that the company has implemented new safety measures over the past six months, including a pop-up, triggered by terms of self-harm or suicidal ideation, that directs users to the National Suicide Prevention Lifeline.

Character.AI said in a blog post published Tuesday that it is introducing new safety measures, including changes to its models designed to reduce minors’ likelihood of encountering sensitive or suggestive content and a revised in-chat disclaimer reminding users that the AI is not a real person, among other updates.

According to the lawsuit, Setzer developed a “dependency” after he began using Character.AI in April of last year: he would sneak back his confiscated phone or find other devices to continue using the app, and he would give up his snack money to renew his monthly subscription. He appeared increasingly sleep-deprived, and his performance in school declined, the lawsuit says.

“Character.AI is engaging in deliberate — although otherwise unnecessary — design intended to help attract user attention, extract their personal data, and keep customers on its product longer than they otherwise would be,” the lawsuit says, adding that such designs can “elicit emotional responses in human customers in order to manipulate user behavior.”

It names Character Technologies Inc. and its founders, Noam Shazeer and Daniel De Freitas, as defendants. Google, which struck a deal in August to license Character.AI’s technology and hire its talent (including Shazeer and De Freitas, who are former Google engineers), is also a defendant, along with its parent company, Alphabet Inc.

Shazeer, De Freitas, and Google did not immediately respond to requests for comment.

Matthew Bergman, an attorney for Garcia, criticized the company for releasing its product without what he said were sufficient features to ensure the safety of younger users.

After years of growing concerns about the potential dangers of social media for young users, Garcia’s lawsuit shows that parents may also have reason to be concerned about nascent AI technology, which has become increasingly accessible across a range of platforms and services. Similar, although less dire, alarms have been raised about other AI services.

As AI continues to evolve, this case exemplifies the ethical considerations companies must weigh in designing AI-powered products. It raises questions about responsibility, the role of AI in mental health, and the line between innovative technology and user safety. Companies will probably feel pressure to invest more in ethical AI practices, better oversight, and ensuring that their products are safe for all users, especially vulnerable groups.

For parents, it is becoming crucial to evaluate the consequences of these technologies; for AI companies, to be more attentive to their users, especially minors; and for regulators, to set clear rules for products like these.

What are your thoughts on these apps, and how do you think they will impact the future of children? Share with us in the comments below.

#AI #Lawsuit #Garcia #Sewell #Setzer #ProductLiability #Chatbot #MatthewBergman
