AI is a compulsive liar

In this series about AI being a compulsive liar, and why you shouldn't believe what it serves up without doing some checking first, I'm going to talk about how not to get played by the machines.

Alright, let’s cut the crap.

AI is amazing, yippee-ki-yay, for sure. It writes my emails, it generates my speeches, it corrects my poor grammar, and it even makes me look smarter than I truly am in front of my boss.

But let me be crystal clear: AI is also a pathological liar.

And if you don’t start treating it like that friend who swears they saw Bigfoot last weekend, you’re setting yourself up for an epic faceplant.

And if you are the rather naive kind who thinks this is just hyperbole, bloody think again, mate. I'm talking about bots like ChatGPT, Gemini, and Claude.

These things hallucinate. We all know that by now, but I have seen them conjure entire alternate realities with the confidence of a bad stand-up comedian bombing on stage.

And if you are using AI for research, as I was in the beginning, and taking its answers at face value, you might as well get your facts from a fortune cookie.


If you like this article and you want to support me:


  1. Comment, or share the article; that will really help spread the word 🙌
  2. Connect with me on Linkedin 🙏
  3. Subscribe to TechTonic Shifts to get your daily dose of tech 📰
  4. TechTonic Shifts has a new blog, full of rubbish you will like!


When AI makes stuff up

Let me tell you a story about AI spouting fiction with the swagger of a snake-oil salesman.

A colleague of mine recently asked Chad (ChatGPT) to name this year’s hottest trends in artificial intelligence.

Sounds simple, right?

Here’s what the chatbot confidently declared:

  • 73% of businesses believe generative AI will boost workforce productivity.
  • 60% of companies think generative AI offers a competitive edge.

Wow, those numbers sound… legit.

Except that they’re not.

The bot was quoting a blog post from "Masters of Code," which itself provided zero proof for these claims.

No citations, no references, just vibes.

And even if the blog post had real stats, these “beliefs” are as actionable as horoscopes. Believing something might happen isn’t the same as it, you know, actually happening.


Three studies that prove AI is a bullsh*t artist

Don’t just take my word for it. Let’s get nerdy with some actual research:

  1. OpenAI’s own report on GPT models
  2. The “TruthfulQA” benchmark
  3. MIT study on factual errors in AI responses


How not to be an AI gullibility statistic

If you are still blindly trusting AI, let me spell it out… your credibility is

😵 🥀

You are doing it wrong.

So here are a few tips to avoid looking like a total idiot when using AI for research:

  1. Demand References
  2. And do click on them, ya n@@b!
  3. Do-as-I-do: Cross-check claims
  4. Be specific in your prompts
  5. Spot hallucinations
  6. Stick to reputable sources
  7. Beware of “belief” metrics
  8. Be ready to defend yourself
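Tips 1 through 3 can be partly automated. Here is a minimal Python sketch of the idea: pull out every URL an AI answer offers as a "reference" and flag the ones that are at least well-formed enough to be worth clicking. The function names and the sample answer are my own invention, not from any real tool, and actually opening each link is still the step you must never skip.

```python
import re
from urllib.parse import urlparse

# Rough pattern for http(s) links; stops at whitespace and common delimiters.
URL_PATTERN = re.compile(r"https?://[^\s)\"'>]+")

def extract_citations(answer: str) -> list[str]:
    """Pull every URL the model offered as a 'reference'."""
    return URL_PATTERN.findall(answer)

def looks_checkable(url: str) -> bool:
    """Cheap sanity check: a proper scheme and a dotted host.
    A well-formed URL can still be a hallucination, so click it anyway."""
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and "." in parts.netloc

# Hypothetical AI answer to run the checks against.
answer = (
    "73% of businesses believe generative AI will boost productivity "
    "(source: https://example.com/genai-report)."
)
for url in extract_citations(answer):
    verdict = "worth clicking" if looks_checkable(url) else "not even a real link"
    print(url, "->", verdict)
```

If the answer contains zero URLs, or only malformed ones, that is your cue to demand references before believing a word of it.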




The legal horror show

Still not convinced?

Let’s revisit Mata v. Avianca, where a New York lawyer used ChatGPT to draft a filing.

Turns out, the chatbot had included six completely fake cases, complete with fabricated legal arguments. The judge was not amused. The result: the case was dismissed, the lawyer was fined $5,000, and a professional reputation went up in flames.

Don’t be him.

Source: Mata v. Avianca, Inc., No. 1:2022cv01461 - Document 54 (S.D.N.Y. 2023) :: Justia


Don’t get played by “the machines”

AI is a tool, not an oracle.

Yup, it is great for brainstorming, summarizing, rewriting your email, and making you feel like you’re living in the future. But trust it blindly, and you’re asking for trouble.

Trust AI the way you trust a kid who sometimes lies: check everything, question everything, and never, ever assume it knows more than you do.

The stakes are too high to be lazy. Your credibility is on the line every time you click “submit.” Don’t let some hallucinating chatbot take you down with it.

Signing off - Marco


Well, that's a wrap for today. Tomorrow, I'll have a fresh episode of TechTonic Shifts for you. If you enjoy my writing and want to support my work, feel free to buy me a coffee ♨️

Think a friend would enjoy this too? Share the newsletter and let them join the conversation. LinkedIn appreciates your likes by making my articles available to more readers.
