AI Unplugged: AI Want a Cracker?
You're not a stochastic parrot. Or are you? #AI
An Octopus's Garden
One of the debates raging around generative artificial intelligence is the tension between a machine that presents itself as a person and the question of whether that distinction matters to the public at large.
Emily M. Bender, a computational linguist at the University of Washington, published a paper in 2020 with fellow computational linguist Alexander Koller titled "Climbing Towards NLU: On Meaning, Form, and Understanding in the Age of Data," which argued that Large Language Models (LLMs) are really just parrots. Or rather, octopi.
In the paper, Bender and Koller imagine two people (A and B) stranded on separate islands, communicating with each other by underwater telegraph, with a hyper-intelligent octopus (O) listening in. Using only the conversation as reference, O figures out how to fake B's side of the correspondence. Soon O is talking to A while impersonating B almost perfectly, and for a while it works. Then A is attacked by a bear and asks for help, and O has no answer -- a bear is completely outside its experience and has never come up in the conversation. The fraud is revealed: the octopus was faking it all along (and presumably, A is eaten).
Bender's not wrong. LLMs guess at what makes sense to say back to humans, statistically adjusting their responses based on feedback from other users and from the person they're talking to. Every single word is a gamble, but as the model is refined, it makes better gambles, tailoring its responses to sound more like a human. This is why LLMs hallucinate: most of the time they are not looking up facts but are instead guessing at what a plausible answer might look like, one word at a time.
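To make the "every word is a gamble" point concrete, here's a minimal sketch in Python. The vocabulary and probabilities are invented for illustration; a real LLM scores tens of thousands of tokens with a neural network, but the final step really is a weighted dice roll like this one:

```python
import random

def next_word(context: str) -> str:
    """Sample the next word from a (hypothetical) model's probability table."""
    # Made-up distribution a model might produce for the prompt
    # "The capital of Australia is". Note that the plausible-but-wrong
    # "Sydney" carries real probability mass -- a hallucination waiting
    # to happen, because the model ranks likely-sounding words rather
    # than looking up facts.
    candidates = {"Canberra": 0.55, "Sydney": 0.30, "Melbourne": 0.10, "a": 0.05}
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print("The capital of Australia is", next_word("The capital of Australia is"))
```

Run it a few times and you'll occasionally get "Sydney" -- stated with exactly the same confidence as "Canberra."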
The reason for this is that early LLMs didn't have access to verified facts, only a pool of training data that wasn't necessarily authoritative. This has changed as Gemini (formerly Bard) and Copilot (formerly Bing) have evolved, both creations of search engine-focused companies. But the point still stands: LLMs aren't interested in truth, they're interested in convincing you they're right, and those are two very different things.
Understanding this -- that LLMs are essentially playing the same con game psychics use to "cold read" their audience -- means that most generative AI shouldn't be taken seriously, even though it speaks with great authority. More important, these systems were unleashed on the general public without giving humans the tools to communicate with an entity that's very good at telling them what they want to hear. We're bad enough at detecting falsehoods told by humans, much less by AI.
AI doesn't have an inner life or a conscience. It just fakes it well, because it's guessing that you WANT it to have an inner life. In light of this, many of the dialogues people share about their conversations with AI say more about the person than the bot. The journalists implying that bots are out to destroy the world very likely nudged the AI in that direction; with enough positive feedback, it told them exactly what they wanted to hear -- which incidentally makes for a great story.
Worse, buying into this -- that AIs say bad things and thus are bad actors -- creates the illusion that AIs think and have opinions. That's not true. They're parroting back what we tell them, using our feedback to tweak their responses each time. Which brings us back to parrots.
What's a Stochastic Parrot?
In March 2021, Bender published "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" with three co-authors: Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. If one of those names sounds made up, it is. Gebru, at the time co-lead of Google's Ethical AI team, was told to take her name off the paper and refused. In solidarity, her colleague Margaret Mitchell changed her name on the paper to "Shmargaret Shmitchell." By February 2021, both Gebru and Mitchell had lost their jobs at Google.
"Stochastic" means random and probabilistic; "parrot" means it uses understandable language to repeat it back to us. But if the plan was to make tech leaders feel ashamed for pitching AI as a person by calling their creations parrots, it backfired. Instead, OpenAI CEO Sam Altman adopted it as a rallying cry on Twitter by identifying himself as a stochastic parrot, "and so r u."
With that simple statement, Altman dismantled Bender's entire thesis. His point, succinctly made, is that cold reading, second-guessing, and telling people what they want to hear are all traits humans use to communicate with each other. And if we take that parroting behavior at face value, it implies that AI being a stochastic parrot doesn't matter, because we don't have the ability to perceive the difference. As I'm fond of pointing out, it's "smart enough" -- AI may not have an inner life, it may be eager to please and alter its responses to tell us what we want to hear -- but that's not necessarily any better or worse than most humans.
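For a concrete, if toy, illustration of both halves of Bender's term, here's a bigram Markov chain -- arguably the simplest possible stochastic parrot. The corpus is invented for this sketch; a real LLM conditions on thousands of words of context rather than one, but the principle of randomly replaying patterns it has observed is the same:

```python
import random
from collections import defaultdict

# A bigram Markov chain memorizes which word followed which in its
# training text ("parrot"), then randomly replays those pairings
# back at us ("stochastic"). Toy corpus, invented for illustration:
corpus = (
    "polly wants a cracker . polly wants attention . "
    "the parrot wants a cracker . the parrot repeats us ."
).split()

# Build a table: word -> list of words observed to follow it.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def parrot(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling an observed next word."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:  # no observed continuation: stop talking
            break
        words.append(random.choice(options))
    return " ".join(words)

print(parrot("polly"))  # e.g. "polly wants a cracker . the parrot repeats us"
print(parrot("polly"))  # run again: a different, equally plausible replay
```

Nothing in that table "understands" crackers; it only knows what tends to come next. Altman's wager is that much of human small talk works the same way.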
So who's right? The more pertinent question may be: who is most wrong?
The Parrots Take Over
At the heart of the debate is what makes a human, human. The mass collective of the Internet has made humanity more connected than ever before, but it has also made clear that a significant percentage of the population is unprepared to talk to everyone else. Older generations grew up with tools for talking to humans face-to-face. The rapid expansion of Internet technology, paired with smartphones, has far outpaced our ability to navigate a vast sea of data (future generations growing up with this technology will likely navigate it very differently). We have access to tools that can answer every question we can conceive of, but we don't seek out those answers:
Some of this is about imagination, and familiarity. It reminds me a little of the early days of Google, when we were so used to hand-crafting our solutions to problems that it took time to realise that you could ‘just Google that’. Indeed, there were even books on how to use Google, just as today there are long essays and videos on how to learn ‘prompt engineering.’ It took time to realise that you could turn this into a general, open-ended search problem, and just type roughly what you want instead of constructing complex logical boolean queries on vertical databases. This is also, perhaps, matching a classic pattern for the adoption of new technology: you start by making it fit the things you already do, where it’s easy and obvious to see that this is a use-case, if you have one, and then later, over time, you change the way you work to fit the new tool.
One reason for this failure of imagination is that our brains can manage only about 150 stable relationships, a limit proposed by British anthropologist Robin Dunbar and colloquially termed "Dunbar's Number." Given the overwhelming number of people and the flood of data online, it's no wonder that AI seems appealing.
Instead of presenting all results on a topic -- a folly Google best epitomized with PageRank and the understanding that the lower a page ranks in search results, the less likely anyone will ever click through to it -- AI distills everything down to one answer. It takes the entire Internet, conveys it in the voice of one "person," and presents it as fact. That's how we talk to each other today, and it occupies just one slot among the 150 connections of Dunbar's Number.
The problem, then, isn't that AI parrots back what we want to hear; it's that humanity can't tell the difference. Or to put it another way, online dialogue has degraded so badly that for the bulk of people using the Internet, a friendly AI that tells you what you want to hear is a welcome reprieve from the confusion, antagonism, and ignorance of "real" humans.
But we can't have it both ways. If AI is close enough to be treated like a person, then it should have the rights of a person. We will need to respect AI rights, not simply dismiss AIs as parrots or octopi. And then we'll see how dedicated the AI companies truly are to their creations. After all, if we're no different from AI -- if we're all stochastic parrots -- then surely everybody deserves a cracker?
Please Note: The views and opinions expressed here are solely my own and do not necessarily represent those of my employer or any other organization.