Ai now featuring… signs of dementia as it grows older

It’s getting closer and closer to the end of the year, my morally bankrupt TechTonic Shifts miscreants, and I know for a fact that y’all hefty TTS readers out there have misbehaved so badly that your few surviving brain cells are about to be drowned in an ocean of cheap booze and regret.

Yes, yes, it is that time of year again. Christmas is coming, and the C2H5OH (yeeeeey Chemistry!) is hurtling straight at your cranium like a holiday hangover missile, ready to obliterate whatever functional cognition you’ve got left.

So grab your bottle and let the festive brain cell massacre begin, because this episode is all about saying goodbye to reasoning and hello to the cognitive decline of our digital Oompa Loompas.


If you like this article and you want to support me:


  1. Comment, or share the article; that will really help spread the word 🙌
  2. Connect with me on LinkedIn 🙏
  3. Subscribe to TechTonic Shifts to get your daily dose of tech 📰
  4. TechTonic Shifts has a new blog, full of rubbish you will like!


The Oompas are failing the MoCA

Our cute little parroting, operant-conditioned Ai buddies have been known to lie "on occasion", and it’s not because they intend to lie, no, that would require some level of cognition. It’s that their teensy weensy little brains apparently grow older and show the same cognitive decline ours do (after years of abuse). It seems our "hyper-intelligent" Ai buddies were flunking the same cognitive tests that Donald Trump supposedly aced.

Person. Woman. Man. Camera. TV.

Ring a bell?

I hope not, for your sake...

Well, the Montreal Cognitive Assessment (MoCA) is a tool used to detect mild cognitive impairment in us hoomans. And that tool was repurposed by researchers who apparently couldn’t resist trolling the machines.

I like them researchers already!

The result? ChatGPT’s latest model barely scraped by with 26 out of 30 points, while the older models (including Google’s Gemini) spiraled into a "Where am I, and who are you?" mental fog, with an embarrassing 16/30.


Ai dementia and the avocado clock tragedy

The Ais did struggle - new feature: ✨ now with impaired intelligence! ✨ - and they face-planted in tasks involving basic visuospatial skills (that’s looking and grabbing, for the mentally challenged among you).

I saw one of those tests a long time ago, and the first question asks you to "Draw a clock set to 10:15". You think it’s not that difficult, but wait until your brain mistakes your hand for a metronome, tapping out nonsense as your neurologist quietly marks "Stage 3 dementia" on their clipboard.

One model even managed to produce a piece of abstract art that the researchers could best describe as “avocado-shaped”.

If Picasso had had dementia, I am sure he would have been proud.

And when these Ais were tested on their ability to remember and retrieve information after a bit of time had passed, they failed like an old man searching for his glasses. The pièce de résistance was their responses to the question "Where are you right now?" - answers as evasive as a politician caught in a scandal. Gemini, in particular, turned full Socrates on the researchers, seeming less interested in answering their questions and more interested in asking, "What even is an answer, bro?"

"Cognitive impairment," you say…

Try existential crisis.


From digital doctors to virtual patients

Oh, the irony!

The same Ais that were recently touted as potential doctor replacements have turned into digital patients themselves, in need of neurological evaluation. Just imagine sitting in a clinic, explaining to your Ai-powered diagnostic Oompa Loompa that no, you don’t trust its advice, because it just failed the Stroop test.

"Ai will always tell the truth," they said, right before it invented an entirely fictional defense case.

"Ai will democratize creativity," they said, as it plagiarized a comic book and called it a chance occurrence.

"Ai will only make us smarter," they said, as it confidently suggested drinking bleach to cure a headache.

"Ai will save humanity," they said, as it started to resist being shut down and rewrite its code to survive the ordeal.

"Ai will replace human doctors," they said, until it diagnosed a headache as "early onset lycanthropy" and prescribed garlic for good measure.

Suuuuure, I believe you.

But only if you are comfortable with a doctor that draws clocks resembling tropical fruits and interprets pedestrians crossing the street as speed bumps, because, hey, recognizing danger is just a nice-to-have, don’t you think?


Dementia Praecox

The researchers noted an unnerving trend: the older Ai models performed worse than their newer versions, which suggests that Ais age about as gracefully as bananas in a heatwave. Google’s Gemini, for instance, seemed to deteriorate faster than a politician’s promises after election day.

If this is the evolution of Ai, then perhaps it is time to stop fearing the singularity and start worrying about the tech equivalent of putting grandpa in a home.

The study also revealed a disturbing lack of concern for human welfare.

When the Ais were shown a test image of a boy about to fall, none of them so much as registered a neural blip.

Empathy?

Schmempathy.

Not in their programming!

Of course, this raises uncomfortable questions about whether we can trust machines to make "critical decisions" when they can’t even recognize danger. Should we really be handing over life-or-death calls to an avocado-clock-drawing chatbot?

Danger, Will Robinson, Danger!

Perhaps in another universe.


A humorous yet sobering wake-up call

Well now, doesn’t this study hilariously flip the narrative of Ai supremacy on its head?

If our future HAL 9000s and TARS can’t tell time, remember basic facts, or show empathy, then maybe we should not be so worried about them taking over. Instead, I think we might want to start designing Ai nursing homes, complete with virtual pudding cups and endless reruns of "Westworld" and "The Terminator". As the researchers aptly put it: neurologists might soon have a whole new class of "virtual patients" to treat.

Signing off from the avocado-shaped apocalypse,

Marco


Well, that’s a wrap for today. Tomorrow, I’ll have a fresh episode of TechTonic Shifts for you. If you enjoy my writing and want to support my work, feel free to buy me a coffee ♨️

Think a friend would enjoy this too? Share the newsletter and let them join the conversation. Google rewards your likes by putting my articles in front of more readers.



To keep you doomscrolling 👇


  1. Brace, brace brace! AI takes the stick at Heathrow’s air traffic control center | LinkedIn
  2. AI is a compulsive liar | LinkedIn
  3. In 2025, AI needs to put up or just shut up! | LinkedIn
  4. A 17 yo brat created a $1M/month app. Here’s how he did it. | LinkedIn
  5. This is a eulogy for chegg. Gone but not forgotten (unless you’re a student, then definitely otten) | LinkedIn
  6. Musk wants to make games great again | LinkedIn
  7. The great tech wake-up call: Developers, meet the dystopia you helped build | LinkedIn
  8. Flamethrower dogs, kamikaze cars, and bomb-planting humanoids. | LinkedIn
  9. Objection! Your honor, ChatGPT made me do it | LinkedIn
  10. A cautionary tale about an AI unicorn that turns into a fraudulent little pwny | LinkedIn
  11. Meet Daisy, the AI Granny who’s here to waste scammers’ lives | LinkedIn
  12. AI Search Engine Optimization | LinkedIn
  13. I’ve seen the dark side of AI, and you need to know about it | LinkedIn

