A new AI can craft Harry Potter stories or code an app. But only we humans can truly think. Right?
Illustration by Nikki Ritmeijer


Welcome to New World Same Humans, a weekly newsletter on trends, technology, and society by David Mattin.


🎧 If you’d prefer to listen to this week’s instalment, go here for the audio version of New World Same Humans #27🎧


This week: we need to talk about neural networks.

Many of you will already know why. You’ve seen the demonstrations, read the stories, surfed both the wave of hype and the backlash to it.

I’m here to reignite that hype all over again. Well, not quite. But I am here to question whether the hype backlash is, itself, overhyped. And that means asking some big questions about the nature of minds, and what the emergence of AI means for our shared future.

Our journey starts, though, with a short paragraph.


Do you mind?

Read this:

It is a curious fact that the last remaining form of social life in which the people of London are still interested is Twitter. I was struck with this curious fact when I went on one of my periodical holidays to the sea-side, and found the whole place twittering like a starling-cage. I called it an anomaly, and it is.

You’ll agree that it’s a passable imitation of the kind of prose written in Victorian England. Specifically, the passage is a pastiche of the writer Jerome K. Jerome, best known for the late 19th-century comic classic Three Men in a Boat.

The tech-heads among you will already know something else about that paragraph. It was written by an AI. Specifically, by a new language model called GPT-3, which was released in closed beta last week. GPT-3 is the creation of OpenAI, the research laboratory founded in 2015 by Elon Musk and others. It’s a neural network trained on a vast amount of text: hundreds of billions of words, drawn largely from a crawl of the public internet. That means it’s able to complete a whole range of tasks – from translation to essay writing to literary imitation – without any need for pre-task fine-tuning, and with only the most trivial instructions.

The results are astonishing. For example, when asked for ‘a screenplay for a film-noir hard boiled detective story by Raymond Chandler about the boy wizard Harry Potter’, GPT-3 turned out a decent attempt. Asked to write a news article about the recent split in the Methodist church, GPT-3 produced a piece that fooled 88% of people into believing it was the work of a person.

It doesn’t end there: GPT-3 can also write code. Here it is coding a replica of the Google homepage in response to a simple one-sentence description of that page.
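For those who want a feel for how trivial those instructions are, here is a rough sketch of what calling GPT-3 looked like through the closed beta’s Python library at the time. The prompt, API key, and parameter values are illustrative placeholders, not taken from the demos above.

    import openai  # OpenAI's Python client, used by closed-beta testers in 2020

    openai.api_key = "YOUR_API_KEY"  # placeholder: beta keys were invitation-only

    # No fine-tuning step; the plain-English prompt alone steers the model.
    response = openai.Completion.create(
        engine="davinci",  # the largest GPT-3 engine available in the beta
        prompt="Write a paragraph about Twitter in the style of Jerome K. Jerome:",
        max_tokens=100,    # cap the length of the generated continuation
        temperature=0.7,   # higher values produce more varied completions
    )

    print(response.choices[0].text)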

Over the last two weeks these results have set the internet on fire. They’ve even caused some observers to speculate that we’re closer to artificial superintelligence than we thought. Indeed, excitement reached such a height that OpenAI CEO Sam Altman sent a tweet asking people to calm down, and pointing out that GPT-3 still makes ‘silly mistakes’.

Since then, plenty of writers have stepped forward to de-escalate the hype. At the heart of their efforts has been a core set of messages. That GPT-3 has simply absorbed a lot of text, built statistical models of which words are most likely to follow other words, and is able to apply those models to cut up and recombine language in billions of plausible ways. That without that input, GPT-3 could produce no output.

That it does not, therefore, create original ideas. That it doesn’t understand anything it is saying. That it’s not in any sense a mind.
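To make the critics’ picture concrete, here is a deliberately toy version of the idea in Python: count which word follows which in some text, then generate by sampling likely next words. GPT-3 is a transformer with 175 billion parameters, not a word-pair table, so this is a caricature of its mechanics – but it rests on the same ‘predict the next word from statistics over past text’ principle the critics are pointing at.

    import random
    from collections import defaultdict

    def train(text):
        """Count, for each word, how often each other word follows it."""
        counts = defaultdict(lambda: defaultdict(int))
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
        return counts

    def generate(counts, word, length=12):
        """Extend a seed word by repeatedly sampling a likely successor."""
        out = [word]
        for _ in range(length):
            followers = counts.get(word)
            if not followers:  # dead end: the word never appeared mid-text
                break
            word = random.choices(list(followers), weights=list(followers.values()))[0]
            out.append(word)
        return " ".join(out)

    corpus = "the whole place twittering like a starling-cage and the whole town talking of it"
    model = train(corpus)
    print(generate(model, "the"))  # e.g. 'the whole town talking of it'

Note that without the training text, generate() can produce nothing at all – which is exactly the critics’ point about input and output.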

Bring me some cheese

All of that seems sensible at first glance.

I’m not here to pour fuel on the fire of naïve GPT-3 hype. But many of those who loudly scorn it – those determined to point out that GPT-3 is in no sense a mind or a general intelligence – are standing on much less secure ground than they believe. It seems to me that it’s no longer a trivial matter to defend the idea that neural networks such as GPT-3 are not minds. Because that defence relies on us knowing things about the nature of minds that we don’t, in truth, know.

I have two sons; they are twins, just about to turn seven. I’ve watched the evolution of their minds from day one. Haven’t they ‘simply’ taken input and built a series of models around it, which they now use to recombine that input in all kinds of ways and so produce output of their own? Isn’t it true for them, too, that without any input they would be incapable of output? Surely what we call an ‘original idea’ is simply a novel recombination of pre-existing inputs, so that no idea created by a human brain comes, as it were, ‘out of nowhere’?

I know that naïve GPT-3 hype is overblown, and I get where critics of it are coming from. GPT-3 isn’t a general intelligence in the commonly accepted meaning of the phrase, or anything close to one. It can’t, for example, read a textbook on astrophysics and start having meaningful new astrophysics ideas of its own. Then again, neither can my sons.

My point is that those most loudly criticising naïve enthusiasm for GPT-3 and artificial general intelligence are often themselves resting on a set of naïve assumptions about the nature of mind. If I ask one of my sons to bring me a piece of cheese, he can. If I ask GPT-3 to show me a picture of some cheese, it can. We all intuitively feel there is a great difference between the processes at work in the first instance, and those in the second. But when we say that GPT-3 doesn’t understand what it is doing in the same way as my son does, what do we mean? We mean it doesn’t ‘really know’ what cheese is and what it’s doing. What does that mean? Does it mean it knows less about cheese than my son? That it is less able to place cheese in its proper contexts? That it is not aware of cheese? Philosophers of mind grapple with these questions, but the answers are highly contested.


At the heart of this is a single question. Is GPT-3 doing something qualitatively different from the human brain? Or is it doing essentially the same thing, but less so?

There are those who think that the human brain is just a highly complex information processing machine. The most famous among them, the American philosopher Daniel Dennett, is unequivocal: human thought is nothing more than information processing, there is no ‘ghost in the machine’, and consciousness – that sense we have of being a self in a state of awareness – is only an artefact of that processing, a kind of illusion. Those arrayed against Dennett say this can’t be so; that consciousness is not an illusion, and that something more than only information processing must be happening in the brain in order to generate it. This is the central question in the philosophy of mind and the scientific struggle to understand consciousness: are brains only information processors, or is something else going on?

OpenAI say the GPT-3 neural network contains 175 billion parameters; the previous iteration, GPT-2, released last year, contained 1.5 billion. The human brain contains trillions of connections between its roughly 100 billion neurons. If human thought is nothing more than information processing, then the functioning of GPT-3, or indeed an abacus, is not qualitatively different from the functions of a human brain. What happens, then, when neural networks start to approximate the level of complexity we see in the brain? Will they then start to produce outputs that are even more similar to those produced by human brains? Or will the continued absence of those kinds of outputs push us towards the conclusion that the human brain does more than only process information?
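Some rough arithmetic on those scales – the synapse count is a commonly cited estimate, and parameters and synapses are not equivalent units, so treat this as a sense of magnitude only:

    gpt2_params = 1.5e9      # GPT-2, 2019
    gpt3_params = 175e9      # GPT-3, 2020
    brain_synapses = 100e12  # ~100 trillion connections, a commonly cited estimate

    print(f"GPT-3 over GPT-2: {gpt3_params / gpt2_params:.0f}x more parameters")  # ~117x
    print(f"Brain over GPT-3: {brain_synapses / gpt3_params:.0f}x more connections")  # ~571x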

Certainly there are credible experts who seem to believe something close to the former. Leading neural network expert Geoffrey Hinton has said: ‘when you get to a trillion [parameters], you're getting to something that's got a chance of really understanding some stuff.’ When neural networks achieve that level of complexity, should we then understand them as doing true ‘thought’? Will they start to show signs of authentic understanding? How will we recognise these signs, if we see them?

Watching me, watching you (aha)

One way to tell the human story is via the evolution of our tools. The hammer was an extension of our arm. The personal computer was an amplifier of our cognitive and communicative abilities. GPT-3 is, in one sense, simply another tool. But as much as we should eschew naïve hype, it is just as naïve to pretend that it’s nothing more than that.

Unlike all our previous tools, neural networks throw into new relief the deepest questions we can ask about ourselves. And unlike with previous tools, the evolution of neural networks may help us to answer those questions. They may in time produce outputs that help us confirm the idea that human thought is information processing, and that the 1.4 kilograms of folded matter inside your skull, which creates the sense you have right now of being a person listening to these words, is only a kind of brilliant calculating machine.

For my part, I think it’s likely that in practice things will play out differently. And that the mysteries of human thought will remain mysterious for some time to come.

As neural networks become more complex, it’s easy to believe they will show new types of capabilities. Perhaps they’ll become better at answering commonsense questions of the kind GPT-3 struggles with, such as ‘does cheese melt in the fridge?’ Perhaps they’ll even be able to read astrophysics textbooks and come up with new theories of their own. It’s hard to see, though, at what point we should hold up our hands and say, ‘okay, that’s real understanding!’ What is the magic bullet output that should cause us to concede that a neural network is really a mind? The Turing Test is an attempt to deal with the impossibility of that question. It tells us that asking whether a machine ‘really thinks’ is meaningless; all we can meaningfully ask is whether a machine produces outputs that are, in our eyes, the same as outputs produced by the human brain. If it can – sure, go ahead and call that ‘thinking’ if it makes you happy.

We already have machines that can produce outputs that fool most people; GPT-3 does just that. And yet we’re still asking if machines ‘think’. It turns out that Turing’s test has not settled the argument. But his fundamental point was right: we don’t have a secure definition of the word ‘thinking’, so it means nothing when we ask if machines ‘think’. In practice, then, it seems to me that as the outputs of neural networks become more complex, and as we become accustomed to coming into contact with them in our daily lives, questions such as ‘but does this AI understand what it is saying?’ and ‘is this AI really a mind?’ will start to fade away. We’ll become more comfortable with the idea that we can’t answer these questions about AIs, just as we can’t really answer them about a three-year-old. Or our best friend. Or ourselves.

Via habitual contact with neural networks we’ll come to believe – without articulating it fully to ourselves – that if we call what we do ‘thinking’, then we may as well apply the same label to what they do.

And then we’ll be faced with something authentically new. That is, a recognition of a non-human, non-organic form of intelligence, with its own categories of perception, its own way of seeing the world, perhaps totally alien to us, perhaps competing with our own. We will believe ourselves to move among a new kind of person. And crucially it will be that shift – not a shift in technology, but in our own beliefs – that will constitute the real revolution.

Because that will be the start of an entirely new cohabitation; a new way of sharing the world. We’ll know, then, that the machines are watching us, just as we watch them.


Attack of the robot dolphins

Four quick snippets for your mind to process this week.

🐬 A New Zealand special effects company called Edge Innovations has created incredibly realistic robot dolphins. The creations are intended to replace captive dolphins at marine entertainment parks; watch the video and you’ll see why test audiences have not been able to tell them apart from the real thing. Interested? It’s $26 million for one dolphin.

🤑 Back in NWSH #25 I wrote about how TikTok is at the centre of the emerging battle between two separate internets. Now US investors including the iconic VC firm Sequoia Capital are making a play to buy the platform from its Chinese parent company, ByteDance, to avert the TikTok ban that President Trump is threatening. Watch this space.

🤦‍♂️ Almost half of the British public believe that Russia interfered in the 2016 Brexit referendum, according to a new poll. This news comes days after the British government released a much-delayed report, which reveals that it did not bother to follow up on suspicions that Russia was attempting to influence the referendum result.

👾 You can now build a 90s-era PC inside the social video game Roblox, and then play the iconic first-person shooter game Doom on that virtual PC. I’m dizzy.


All for one

Thanks for reading this week.

The human brain is the most complex object in the known universe. We can think of it as a piece of organic technology; one far in advance of any neural network we’ve so far managed to build. And it’s over 200,000 years old.


Those facts lie close to the heart of the core message of this newsletter. Yes, we live in a fast-changing and complex world. But amid all that we’re still the same humans, with the same old human nature. New World Same Humans is on a mission to interrogate that fascinating predicament – to understand the trends reshaping the world around us, and what they mean for the way we’ll live, work, play and think in the decades ahead. In 2020, that mission feels more urgent than ever.

You're reading the LinkedIn version of NWSH, but there's so much more on offer if you sign up for the full experience. Subscribers get:

  • The regular newsletter on Sunday evening – in text and podcast form
  • A forthcoming interview series in which leading thinkers share their take on what lies ahead
  • A Slack community that supercharges your personal mission to build a better future.

*** It takes ten seconds to join, and you can sign up here ***

See you next week,

David.


David Mattin sits on the World Economic Forum’s Global Future Council on Consumption.
