My boyfriend sent me a heartfelt message - then I learned he'd used ChatGPT

When Avery's boyfriend sent her a long apology validating everything she was feeling, she was astonished. Then she realised that AI had done all the work for him

Humans can only identify AI text 53% of the time, studies show

As is the case with most arguments, it was stupid until it wasn’t. In January 2024, 22-year-old Boston University student Avery* was upset after a night out – some bouncers had been rude to her and her friends. Seeking support, she told her boyfriend of two years about the incident but became increasingly upset when he dismissed her feelings. The couple argued the next morning but Avery’s boyfriend had to go to work, so she continued the fight over text.

“I sent this long text explaining exactly what I felt and why I was upset,” Avery says. She even asked her boyfriend to read over her message carefully before responding, because she felt he had a tendency to overlook things written in texts. Fifteen minutes later, “he sent this long apology, apologising for everything, validating everything I was feeling.” Avery was astonished.

“Wow,” she remembers thinking, “That was really nice. He really read what I said for once.”

It was nice until it wasn’t. For a few minutes, Avery felt content: she thanked her boyfriend for the message, pleased that he could finally understand her perspective. Then she re-read his apology and thought, “Hold on. This is too good to be true.”

Avery put her original message into the artificial intelligence model ChatGPT. She then asked the AI chatbot to generate an apology in response to her text. She was “dumbfounded” when it spat out a response “almost exactly the same” as the message her boyfriend had sent. When confronted, he admitted it was true – he had used ChatGPT to apologise.

He isn’t the only one. In October 2023, it was alleged that the publishers of the poorly reviewed and inherently broken video game The Lord of the Rings: Gollum used ChatGPT to write an apology to customers. That same year, staff at Peabody College at Vanderbilt University employed the chatbot to write a condolence email after a mass shooting at another university.

Artificial intelligence researchers have found that GPT-4 (the model behind the paid version of ChatGPT) can infer a human’s mental state and provide nuanced advice. Arguably, therefore, it can be used to navigate complex social situations. The question is: should we let it?

“It made me feel a little bit like a fool,” Avery says, “It just felt disrespectful to my time, to our relationship.” She believes the situation would’ve been different if her boyfriend had “written out all of his own thoughts and opinions on the matter into ChatGPT” and asked the app to help him articulate his stance. But instead: “He literally just took my message and copied and pasted it and had a robot write a response to his girlfriend.”

Aaron is a 50-year-old property manager from Oregon who regularly uses ChatGPT to edit emails to tenants and building owners. In the spring of 2023, Aaron went on a pub crawl and accidentally took the only set of car keys, leaving his wife of 24 years stranded. He asked ChatGPT to draft an apology: “Sweetheart, I’m sorry,” it began. The AI understood that Aaron’s wife must’ve felt “neglected and disrespected”. It promised “to be more considerate” in the future.

Aaron sent the message but ultimately doesn’t think it “helped at all.” What actually solved the problem was getting his son to come and pick up the keys.

In the end, his wife wasn’t surprised or hurt to find out that ChatGPT wrote the apology, because the couple had played around with the tool before (for example, Aaron had previously asked it to generate an over-the-top, three-page request asking his wife if he could go to the movies with his friends). He thinks the AI-generated apology provided “something funny to laugh about” once the argument died down.

Still, that doesn’t mean Aaron wouldn’t seriously use ChatGPT in this way. He believes that AI can be used to “keep you from saying something that’s misunderstood” and help you “have a response that isn’t coloured by emotion.”

“It gives you the ability to come up with a measured, intelligent response in a very short period of time,” Aaron says. “If I could give ChatGPT to my younger self and say, ‘You need to run everything you say through this machine before you send it to anybody’, that would probably [have saved] me a lot of trouble in my past.”

Can Aaron understand how this might come across as dishonest? “It comes down to that relationship,” he says: some people might think they’ve been lied to, but others might think “you cared enough to make sure you were saying the right things.”

When Avery told her parents about her boyfriend’s AI-generated apology, they had very different responses. Her mum thought it was “messed up” but her dad thought her boyfriend was “just trying to make [her] happy.”

“I suspect we will see more incidences of this type of usage as more people become familiar with AI tools,” says Dawn Branley-Bell, an associate professor of cyberpsychology at Northumbria University. Branley-Bell points out that there’s a difference between using AI to learn how to apologise versus using AI to apologise on your behalf.

“AI tools can be very beneficial in helping individuals learn how to articulate their words more effectively and empathetically,” she says, noting that neurodiverse individuals can also use AI to interpret meaning.

“Careful use of AI in these contexts can help to alleviate anxiety and cognitive burden.” But, she adds: “More problematic use can occur when individuals – or organisations as we have seen in some instances – use AI as a shortcut.”

Ultimately, Branley-Bell believes “the ethical distinction centres on whether AI is used to aid and refine the users’ own thoughts and words, or as a substitute for effort.” She notes that if people generate apologies without actually reflecting and learning, then they are unlikely to be able to improve their relationships in the long term.

Does using AI like this ultimately make us less human? Some might argue it makes us more so; after all, ChatGPT could help us understand other people and even ourselves better. It’s clear that the idea will remain fundamentally distasteful and alarming to many – and yet equally appealing to many others. Perhaps “How do you feel about AI apologies?” needs to become a regular question on dating apps.

So, how can you tell if someone has artificially apologised? If their message is overly formal or uses words and spellings they don’t normally use, then they might have employed ChatGPT.

However, academics at Penn State University have found that humans can identify AI text only 53 per cent of the time. Ironically, AI is better at detecting AI. It might be easier to tell when you’re talking to a loved one and the tone of their messages suddenly changes – but still, you can see how false accusations could arise.

Avery ultimately “brushed off” the ChatGPT incident because she already felt her relationship with her boyfriend was ending – they broke up two weeks later.

Aaron and his wife continue to “joke around a lot” in their relationship; the ChatGPT apology was “par for the course for our type of humour.” But despite continuing to use the tool regularly, Aaron isn’t actually an AI optimist – he fears that soon we won’t be able to tell what’s real and what’s AI-generated. “I’m absolutely positive we’re screwed,” he says, “But in the meantime, you know, it makes my life a little bit easier.”

* Name has been changed
