Objection! Your honor, ChatGPT made me do it

I want you to imagine this scene: a courtroom in Minnesota, a Stanford professor with an impressive title, and a legal document that reads like it was written by your drunk uncle's predictive text app. Yes, folks, we have hit peak 2024 absurdity. Court submissions are now being ghostwritten by ChatGPT, and (shocking?) some of them were not entirely accurate.


If you like this article and you want to support me:


  1. Comment, or share the article; that will really help spread the word 🙌
  2. Connect with me on Linkedin 🙏
  3. Subscribe to TechTonic Shifts to get your daily dose of tech 📰
  4. Visit TechTonic Shifts. We have a new blog, full of rubbish you will like!


Meet our star: The Professor who trusted the Matrix

Jeff Hancock is a Stanford professor who specializes in communication and AI's role in misinformation (oh, the irony). He decided to enter a legal battlefield armed with a 12-page expert declaration. His task was to support Minnesota's attorney general in defending a law that criminalizes AI-generated election deepfakes.

….. Even more irony….

His whole case revolved around a citation to a study so elusive that it doesn't even exist.

Poof. Gone.

Never was.

Classic ChatGPT move: "Fake it ‘til you make it."

This alleged "study" was supposedly produced by the imaginary dream team of Huang, Zhang, and Wang, and was supposedly published in a real journal. Except, when the plaintiffs' lawyers went digging, they found that the journal volume poor Hancock cited was about climate change and election results, NOT deepfakes.


The plot thickens

Now, before we throw Hancock entirely under the self-driving bus, let’s consider the possibilities.

Answer the following multiple-choice question...

Did the professor:

A. Blindly trust ChatGPT to generate his references without double-checking?

B. Get sabotaged by the Minnesota legal team, who slipped the bogus citation into his report like a Trojan horse?

or

C. Decide he was too busy to fact-check because, hey, ChatGPT never lies?

The plaintiffs pointed out that no matter which choice our gullible professor made, he swore under penalty of perjury that he had reviewed the materials.

Cue the ominous music.


Déjà vu: lawyers keep getting burned by ChatGPT

This isn't ChatGPT's first f*** up in the legal world, though.

Maybe you remember Steven Schwartz. He is the New York lawyer who submitted an entire legal brief filled with fabricated cases. He had ChatGPT produce fake judicial opinions and then boldly presented them in court. That worked right up until the judge found out and grilled him for hours. Schwartz's excuse was that he didn't realize AI could lie.

Seriously, man?

That's like handing your car keys to your toddler and being surprised when the car crashes into your mailbox.

His cringeworthy defense included statements like, "God, I wish I did that, and I didn't do it", as if he were auditioning for an infomercial about bad decisions.

Well, this one ended with a $5,000 fine and enough humiliation to last a lifetime.


The bigger question: should a GPT be your legal intern?

Look, AI is great for writing quirky LinkedIn or Tinder bios, or for helping you craft sarcastic articles (you're welcome). But relying on it for legal submissions is like asking a set of tarot cards for legal advice. AI tools are notorious for "hallucinations", aka making stuff up (or sometimes simply lying) with the confidence of a toddler insisting they didn't eat the cookies while covered in crumbs.

What's truly wild is that this is a cultural shift, not just a glitch in the matrix. We have reached a point where even experts (definition: people who should know better) are handing over the reins to quirky-reasoning machines without even taking the time to double-check their work.


Don't trust everything with a .AI

For those playing along at home, here’s the moral of the story:

  1. If you’re using a probabilistic AI for anything serious, fact-check it. Twice. With a hooman.
  2. Don’t swear under oath unless you’re 100% sure your sources aren’t made up by a drunk Oompa Loompa.
  3. Maybe AI shouldn’t be your go-to for legal or academic submissions. Just a thought.

As for Hancock, his reputation has definitely taken a hit, but he can rest assured knowing that he's not alone. He is becoming a statistic. The growing pains of AI adoption are turning into a comedy of errors.

We humans still need to keep our hands on the wheel. Literally.

Signing off - Marco


Well, that's a wrap for today. Tomorrow, I'll have a fresh episode of TechTonic Shifts for you. If you enjoy my writing and want to support my work, feel free to buy me a coffee ♨️

Think a friend would enjoy this too? Share the newsletter and let them join the conversation. LinkedIn appreciates your likes by making my articles available to more readers.

Johannes Cloete

Technical & Business Consultant

Marco, this was a brilliantly entertaining yet sobering read. You’ve captured the absurdity of our AI-infused times with wit and precision! It’s wild to think that even the brightest minds are falling for AI’s toddler-level confidence. The courtroom anecdotes are equal parts hilarious and horrifying—especially the thought of experts under oath citing 'studies' dreamed up by ChatGPT. Your comparison to tarot cards is spot on; AI tools might be predictive, but they’re definitely not infallible. I particularly appreciated your point about this being a cultural shift, not just a tech glitch. It highlights how essential it is for us to adapt our critical thinking to match the pace of AI adoption. Moral of the story: Always fact-check the virtual toddler’s work—preferably twice! Looking forward to more gems from TechTonic Shifts. Keep up the great work!

Nikolai Karelin

Head of ML/AI at SilkData | AI Consulting & Architecture | NLP, Document AI | Scientific Computing | Python | Lead Developer | Mentor | PhD in Physics

It reminds me of a book author from the '90s who wrote "it is said on the internet that..." and then some nonsense, without even a reference to a source. BTW, these days marked the 2nd anniversary of ChatGPT. All of this is still very new, I think 🤔
