Are brains and computers alike?
Is the brain a computer? At first glance, the answer might seem obvious: no. Brains and computers are very different. Brains are biological, made from organic material—gooey stuff—while computers are electronic devices made from non-organic materials—clicky stuff—practically the opposite of brains. Settled, right?
Is this a meaningful difference, though? When we ask if two things are equivalent, the answer may be wildly different depending on how we compare them. We can focus on what things are made of or how they look, but these are often the most superficial—and least interesting—comparison criteria.
Let’s take a different approach and ask whether two things are alike in terms of what they can do, that is, compare their functionality. Is this a meaningful comparison? Here’s an example of why there may be some meat to this. Think of the concept of a chair. How many shapes, materials, colors, sizes, and styles can you imagine a chair could have?
A chair made of wood and a chair made of steel are very different in composition, fabrication process, durability, etc., but they are still equivalent in terms of what chairs are used for—sitting down. Is a stool a chair? Is a rocking chair a chair? What about a gaming chair, the place you sit in your car, or the thing astronauts sit on inside a rocket ship? Heck, if we are talking about stuff you can sit on, even a flat enough boulder is a chair!
But wait, why are we talking about chairs? What does this have to do with brains and computers? That is a fair question, and here’s the point I want to make. When comparing things, one significant point of comparison is what they can do. That is, comparing things in terms of their function. If two things perform the same function, they are, in a sense, equivalent. Let’s call this a functionalist paradigm and get back to whether brains and computers are equivalent.
In this article, I want to tackle this question from the point of view of computational functionalism, a specific form of functionalism that I will lay out in more detail later. What I want to do is, first, convince you that this question is much more profound than it seems at first sight, and second, give you some arguments on both sides of the discussion. As usual with these philosophical articles, I won’t (can’t) give you a definite answer, but I hope you come out on the other side more informed, and perhaps have a bit of fun in the process.
Computational functionalism in a nutshell
We will begin by dissecting the question “are brains and computers equivalent?” and defining precisely what we mean by “brain,” “computer,” and, well, “equivalent,” for that matter. Then, we will go over some of the most common counterarguments to the computationalist hypothesis and, finally, present some arguments for why one could believe brains are indeed “just” computers.
What is a brain?
At first glance, this seems like an obvious question. A brain is that gooey stuff inside your skull where all cognition happens. It’s where, in some way we still don’t quite understand, you think your thoughts and feel your feelings. It’s also where—so we are told—something as elusive as consciousness resides.
However, let’s go back to our chair example. Many things out there don’t look like human brains but still have a similar function. Other animals have brains—granted, very similar to ours in many cases. But then you have octopuses (or octopi, whatever floats your boat), which have roughly two-thirds of their neurons in their arms! Yet almost no one would claim they lack at least some limited form of cognition. And what about aliens? Do we think any living, sentient, self-aware being out there will have something strikingly similar to this gooey, wrinkled piece of meat we call the brain?
As we’ve seen, for a functionalist, these differences are unimportant. What we care about when talking about brains is the function they perform—that’s the whole deal with functionalism! The functionalist will claim that whatever a brain is, is what a brain does. Cognition, sentience, consciousness, and all subjective experiences are all entirely defined in terms of their function. So, when functionalists summon the concept of a brain, they think about what a brain does: in short, hosting a mind.
So, let’s define the brain as the kind of physical substrate that can host a mind, with all the complex cognitive processes, subjective experiences, emotions, qualia, and everything else you want to claim a mind does.
What is a computer?
Like before, this seems like an obvious question, but at this point, we know better than to rush to a conclusion. Intuitively, a computer is something that does some advanced form of calculation. You can have mainframe computers, desktops, laptops, smartphones, smartwatches, mini-computers like a Raspberry Pi, and very weird computers like what goes inside a self-driving car or a spaceship.
Furthermore, even though most actual computers we have today are made of silicon-based transistors laid out in tightly-knit circuit boards, this is hardly the only way to implement a computer. Before fully electronic computers, we had electromechanical computers built from relays and moving parts that sounded like trains. Crazy dudes have proposed more than one design for a fully working hydraulic computer. We even have bioelectrical prototypes that actually mix gooey stuff with traditional circuits.
So, just as before, let’s consider the function of a computer. This is actually way easier than with brains because unlike brains—which are things out there in nature—computers are something we made up. And we have a whole field in computer science called computability theory dedicated precisely to studying what a computer can do.
In short, a computer is an abstract mathematical construct capable of computing any effectively computable function. An effectively computable function is any mathematical function that can be calculated with a finite series of mechanical steps without resorting to guessing or magical inspiration—in other words, an algorithm. The Turing machine is the canonical mathematical model of this abstraction; modern electronic computers are one possible physical embodiment of it, but they are hardly the only possibility.
So, let’s define a computer as any device capable of computing all computable functions.
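To make this abstraction concrete, here is a minimal sketch of the canonical model, a Turing machine, with a toy rule table of my own invention that flips every bit on its tape:

```python
# A minimal Turing machine simulator. Rules map (state, read symbol) to
# (write symbol, move L/R, next state); the toy table below flips bits.
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(cells[i] for i in sorted(cells))
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    raise RuntimeError("no halt within the step budget")

FLIP_BITS = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),  # blank found: we're done
}

print(run_turing_machine(FLIP_BITS, "1011"))  # prints 0100_
```

A single rule table like this computes just one function; the “computer” of our definition is the simulator itself, which, given unbounded tape, can run any such table.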
What is computationalism?
Now, we are ready to reframe our original question in more precise terms.
Let’s start with functionalism and build our way up. When talking about theories of mind, functionalism is the theory that all cognitive processes are completely characterized by their function. This means that, for example, whatever consciousness is, is just about what consciousness does. In other words, if you have some way to reproduce what consciousness, or any other cognitive process does, it doesn’t matter which substrate you use, what you will have is genuinely the same thing.
Put more boldly, any system that performs the same functions as a mind is a mind, period.
Computational functionalism goes one step further and claims that all cognitive processes are actually computable functions. That means a sufficiently complex and suitably programmed computer could perform these functions to the same extent a brain does, and so it would presumably host a mind. More formally, it claims that all cognitive processes are computational in nature and thus can, in principle, be implemented in a substrate other than biological brains, as long as it supports general-purpose computation.
In other words, computationalism is precisely the claim that brains are computers, understanding both these terms with all the nuances we have already discussed.
To be clear, computationalism doesn’t claim modern computers are conscious or even that our current most advanced AI systems are on the right path to ever becoming truly intelligent or conscious. It just claims there is some way to build a computer, at least in principle, that is indeed self-aware, fully intelligent, and conscious, even if we have no clue what it takes to build one.
Now, there are at least two forms of computationalism; let's call them weak and strong (these are my definitions, not standard). Weak computationalism is just the claim that cognition is computation. This means that all forms of intelligence—understood as problem-solving, reasoning, etc., irrespective of whether there is a sentient being in there—are just advanced forms of computation. In other words, there is nothing super-computational in humans, animals, aliens, or any other form of intelligence. A sufficiently powerful computer can be as intelligent as anything else.
Strong computationalism goes further and claims that consciousness, sentience, self-awareness, and qualia—that is, all subjective experiences—are also computational in nature. Thus, a sufficiently powerful computer with the correct software would indeed be conscious and experience an inner world, just as we presume all humans and many animals do.
The roots of computationalism can be traced back to McCulloch and Pitts, the fathers of connectionism, a competing theory of mind. They were the first to seriously suggest that neural activity is computational and to propose a mathematical model for it, a precursor of modern artificial neural networks. However, it wasn’t until well into the 1960s that computationalism started to be developed as a philosophical theory of mind.
However, perhaps the most famous instance of a computationalist perspective in popular culture is the Turing Test. In a seminal 1950 paper, Alan Turing, widely recognized as the forefather of computer science (Turing machines are named after him!), proposed what he called “the imitation game,” a thought experiment to determine whether a computer is thinking.
In this thought experiment, a computer and a human are placed behind closed doors, able to communicate with a second human—the judge—only through a chat interface. The judge can ask both participants anything and must ultimately decide which is the computer and which is the human. If the computer manages to confuse the judge more often than not, Turing claims we must acknowledge the computer is performing something indistinguishable from what humans call thinking. And according to functionalism, then, the computer is thinking.
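For concreteness, the game’s structure can be sketched as a protocol. Everything here (the `ask`, `answer`, and `identify_machine` interfaces) is my own hypothetical framing, not anything from Turing’s paper:

```python
import random

# One round-based framing of the imitation game. The judge never sees who is
# behind labels "A" and "B" and must guess from the transcript alone.
def imitation_game(judge, human, machine, rounds: int = 5) -> bool:
    players = {"A": human, "B": machine}
    if random.random() < 0.5:  # hide identities behind shuffled labels
        players = {"A": machine, "B": human}
    transcript = []
    for _ in range(rounds):
        question = judge.ask(transcript)
        answers = {label: p.answer(question) for label, p in players.items()}
        transcript.append((question, answers))
    guess = judge.identify_machine(transcript)  # judge returns "A" or "B"
    return players[guess] is machine  # True if the machine was caught
```

If, over many games, this function returns True about as often as a coin flip, the machine has passed.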
Now that we've sorted our concepts, let's tackle the question.
Is computational functionalism true?
We've already dismissed the most obvious counterargument to computationalism—that brains are gooey while computers are not. But even if we disregard this superficial difference in composition, there are still important structural differences between existing brains and computers that we cannot just gloss over. And then, once we clear those more direct counterarguments, we will turn to more nuanced and profound differences between brains and computers.
Mind independence of substrate
Computers have a distinctively hierarchical structure, from hardware to software: starting at transistors and building up to registers, microprocessors, kernels, operating systems, and applications, to give a very simplistic description. If any part that performs a specific function breaks, it all falls apart.
On the contrary, the brain seems to have some structure, but it is much more fluid and flexible. You can remove entire brain regions, and it will often rewire itself and relearn the affected functions, sometimes almost to perfection.
On the other hand, there seems to be a clear distinction between software and hardware in computers, to the point where you can move software around independently of the hardware. There is no such thing in the brain—as far as we know, we can't simply paste your thoughts into some portable device, load them into a freshly minted meat suit, and get a second copy of you.
Maybe this is it. Perhaps the interconnected, seemingly inseparable nature of thoughts and substrate in the brain is fundamental to consciousness and self-awareness. Maybe if the mind can be moved out of the brain at all, it cannot be a true mind.
Well, if this were the case, and the mind is inseparable from the brain, this realization would obliterate most known religions. Forget about any form of transcendental survival of the soul. As soon as your brain dies, your mind is dead. No uploading to the cloud, metaphorically or literally. However, while this might certainly upset a bunch of people, I'm not a religious person, so I'm not bothered by this particular argument.
And even if this were the case—a true mind must be inseparable from its substrate—this seemingly explicit separation between hardware and software is, first, just an implementation detail and, second, kind of a useful lie as well.
If you look inside a modern microprocessor, it is not trivial to distinguish where hardware ends and software begins. Not only is there programmable code running at the microprocessor level (microcode) but, even more importantly, the simplest logical circuits inside a computer are both hardware and software at the same time. Just the clever disposition of transistors in a specific layout is enough to make a circuit that adds, or multiplies, or does basically any other computable thing. The hardware is the software in these cases.
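To see the point, consider how addition falls out of nothing but gate layout. Here is a sketch (a software model of the circuit, obviously) of a ripple-carry adder, chained exactly the way it is laid out in silicon; there is no program anywhere, just structure:

```python
# A full adder built from bare logic gates (XOR, AND, OR).
def full_adder(a: int, b: int, carry: int) -> tuple[int, int]:
    s = a ^ b ^ carry                        # sum bit
    carry_out = (a & b) | (carry & (a ^ b))  # carry into the next stage
    return s, carry_out

# Chain full adders bit by bit, like a ripple-carry circuit on a chip.
def ripple_add(x: int, y: int, bits: int = 8) -> int:
    result, carry = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

assert ripple_add(23, 42) == 65
```

The “algorithm” for addition is the wiring itself: call it hardware or call it software, the distinction adds nothing.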
So, while some things—like cables—are clearly hardware and other things—like the browser you’re using right now—are clearly software, it is simply not true that there is a clear-cut point of separation between these two concepts. It’s just a useful abstraction.
And then we have artificial neural networks, like the ones powering ChatGPT. These are universal approximators, meaning they can approximate any reasonably well-behaved function to an arbitrary degree of precision. And they are much closer to this idea of a flexible architecture where software and hardware intertwine in ways that are hard to clearly differentiate. While modern artificial neural networks are far from a precise simulation of brains—and that’s by design; they are not even trying to simulate brains—there is no a priori reason why we can’t build a silicon computer that perfectly mimics the physical processes inside a real, biological brain.
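As a toy illustration of the universal approximation idea (my own example, and an illustration rather than a proof), a one-hidden-layer network with random tanh features can already fit sin(x) closely once we solve for the output weights:

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(-np.pi, np.pi, 500).reshape(-1, 1)
y = np.sin(x)

hidden = 50
W1 = rng.normal(0, 2, (1, hidden))  # random input weights (never trained)
b1 = rng.normal(0, 2, hidden)       # random biases
H = np.tanh(x @ W1 + b1)            # hidden-layer activations

# Fit only the output layer by linear least squares.
W2, *_ = np.linalg.lstsq(H, y, rcond=None)
print("max abs error:", np.abs(H @ W2 - y).max())
```

A toy curve fit is not a brain, of course; the point is only that these systems blur the hardware/software boundary while remaining thoroughly computational.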
This means any attack on computationalism based on structural differences between actual instances of brains and computers is futile. We cannot generalize a negative claim from a finite set of negative examples. No matter how many computers you find that are not brains, you can never be sure a computer cannot, by definition, ever be equivalent to a brain. This is just the typical problem of induction, so we must find another angle of attack, something that targets the fundamental definition of computer rather than any concrete implementation.
Syntax versus semantics
The most famous attack on computationalism is John Searle's Chinese Room thought experiment. It is meant as a counterargument to Turing's imitation game, showing that even if a system can flawlessly simulate understanding, it may not actually understand anything at all. Here is a quick recap.
Suppose a man is placed inside a room with no communication with the outside world except via an "input" and an "output" window. Through the input window, the man receives, from time to time, sheets of paper with strange symbols on them. Using a presumably very big book, the man's only job is to follow a set of dead-simple instructions that determine, for any combination of input symbols, what other strange symbols he must write on a new sheet and push through the output window.
Now, here is the plot twist. The input symbols are well-written questions in Chinese, and the output symbols are the corresponding, correct answers. The huge book is designed such that an appropriate answer will be computed for every possible input question. So, when seen from the outside, it seems this system understands Chinese and can answer any plausible question in this language. However, Searle argues that neither the man, the book, nor any part of the system actually understands the questions. (If this sounds eerily similar to ChatGPT, give Searle a round of applause; he came up with this thought experiment more than four decades earlier.)
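To make the "book" concrete, here is a deliberately dumb sketch (my own toy, orders of magnitude simpler than Searle's exhaustive rulebook): a lookup table from symbol strings to symbol strings, manipulating syntax with zero access to meaning:

```python
# The "book": pure symbol-to-symbol rules, no representation of meaning.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",           # "How are you?" -> "Fine, thanks."
    "天空是什么颜色？": "天空是蓝色的。",    # "What color is the sky?" -> "Blue."
}

def chinese_room(symbols: str) -> str:
    # The man in the room: match the input, copy out the prescribed output.
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # fluent output, zero understanding inside
```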
Searle's point is that syntax and semantics are two qualitatively different layers of understanding, such that no degree of syntax manipulation—which is, according to Searle, all a computer can do—can amount to actually understanding the semantics of a given message.
Many experts today would say that ChatGPT is only manipulating vectors in a way that happens to produce mostly coherent answers, with no real "mind" inside doing any sort of "understanding". In a similar sense, Searle believes any computer is fundamentally incapable of true understanding in any meaningful sense of the word, simply because computation acts at the syntactic level and is incapable of bridging the gap to semantics.
Searle's attack is pretty strong. There is not much a computationalist can show as a defense other than trying to point to some place in the Chinese room where understanding happens. The best such defense, the so-called systems reply, is to claim that even if no single part of the system—neither the man, nor the book, nor the room itself—can be said to understand anything, the system as a whole does understand. In the same way, no single part of your brain is conscious, but you are as a whole, maybe? Still, not a very strong defense if you ask me.
Qualia and knowledge
Another interesting challenge to computationalism is the thought experiment known as "Mary's Room", conceived by philosopher Frank Jackson. It deals with the nature of subjective experiences—or qualia—and how important they are for grounding knowledge about reality. The story goes like this.
Mary is a neuroscientist who has lived her entire life in a black-and-white room, learning everything there is to know about color vision through books and scientific literature. She knows all the physical facts about color and how the human brain processes visual stimuli. However, she has never experienced true color.
One day, she finally exits the room and sees color for the first time. The question is whether Mary learns something qualitatively new about color, something that can only be learned through direct experience rather than by reading or indirectly studying the phenomenon of color perception.
For many, Mary's experience is indeed qualitatively new. If you think so, this raises critical questions for computationalism. If computationalism asserts that all cognitive processes can be reduced to computational functions, then one might wonder whether knowing all the physical facts about a phenomenon is equivalent to experiencing it.
Mary's Room suggests a gap between knowledge and experience—one that cannot be bridged by computation alone. This challenges the idea that a computer, no matter how advanced, could genuinely "know" or "experience" qualia in the same way a human does. At least, it tells us ChatGPT cannot truly understand what seeing red, feeling warmth, or being in love feels like.
The implications for the computationalist perspective are profound. Even if a computer could simulate all the functions of a brain, it might still lack the intrinsic, subjective experiences that characterize human consciousness. A computer might process information about colors and respond appropriately, yet never "know" what it is like to see red or feel the warmth of sunlight—experiences that are inherently qualitative and personal. This distinction between knowledge and experience underscores a potential limitation of computationalism: it may account for the mechanics of cognition while failing to address the richness of conscious experience. Mary's Room is a poignant reminder that treating the brain as a computer may overlook the essential nature of qualia.
On the other hand—you may argue—if Mary indeed knew everything that could be known about color perception other than "how it feels," how is that any different from actual perception? In a sense, the brain doesn't truly "experience" color. It just perceives some electrical impulses correlated with the frequency of the light waves coming through our eyes. Can't color perception be explained as just another layer of simulation on top of an actually blind, purely computational brain?
In any case, Mary's Room doesn't preclude us from positing extremely advanced mechanical entities that emulate all the physical aspects of color perception. Nothing is inherently unmechanical in our current understanding of how light frequency stimulates some sensors that produce an electric signal in the brain. It is the subjective experience of what it feels like to see red that we cannot reduce to that mechanistic explanation. And this is precisely what functionalism attempts to capture: if two mechanisms perform the exact same functions, they are the same.
Other attacks
In addition to the prominent arguments against computationalism, several other critiques have emerged that challenge the notion of equating brains with computers. Two notable lines of reasoning come from Roger Penrose's insights on mathematical understanding and the connectionist perspective on reasoning.
Non-computability of human creativity
Roger Penrose, a renowned physicist and mathematician, argues that certain mathematical insights are inherently non-computable. He posits that human mathematicians can grasp concepts and solve problems that transcend algorithmic computation, suggesting that there are aspects of human thought that cannot be replicated by any computer, regardless of its complexity.
As an example, Penrose notes that while we have formal proofs that some mathematical problems are unsolvable by any method of effective computation (Gödel's incompleteness theorems and the halting problem are the canonical examples), human mathematicians can seemingly summon some hyper-computational ability to gain insights into these problems. In the same sense, many propose that creativity and artistic expression in humans are clear examples of non-computable thought processes.
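The canonical unsolvability result here can be sketched in a few lines. This is the classic diagonal argument for the halting problem (not Penrose's own argument), with the impossible `halts` function assumed to exist purely for the sake of contradiction:

```python
# Suppose, for contradiction, a total function halts(p, x) correctly
# reports whether the program p eventually halts on input x.
def halts(program, argument) -> bool:
    raise NotImplementedError  # assumed to exist only for the argument's sake

def paradox(program):
    # Do the opposite of whatever halts() predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:
            pass  # predicted to halt -> loop forever
    else:
        return    # predicted to loop -> halt immediately

# Does paradox(paradox) halt? If halts(paradox, paradox) says True, paradox
# loops forever; if it says False, paradox halts. Either answer is wrong,
# so no general halts() can exist.
```

Penrose's controversial extra step is the claim that human mathematicians can nonetheless "see" truths of this kind that no algorithm could derive.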
Connectionist arguments against symbol manipulation
Connectionism is an alternative theory of mind that posits mental processes can be understood through networks of simple units (like neurons) rather than through traditional symbolic manipulation. Proponents of connectionism argue that reasoning does not necessarily require the explicit representation of symbols or the manipulation of those symbols according to formal rules. Instead, they suggest that cognitive processes can emerge from the interactions within a network, where knowledge is distributed rather than localized.
This perspective challenges the computationalist assumption that reasoning must rely on structured representations and symbol manipulation. It proposes instead that cognition may arise from more fluid and dynamic processes akin to those found in neural networks, and that symbolic manipulation of the kind that happens in a traditional algorithmic procedure is, at best, an emergent phenomenon in the brain, not a necessary feature of an actual mind.
Moving forward
Is the brain a computer? We honestly don't know. It is a tough question, perhaps the hardest question we can ever conceive, because it challenges our most valuable and effective tool for discovering truths about the world: science. Stay with me for a second.
Science—understood as the process of generating hypotheses, producing predictions from those hypotheses, and testing those predictions via experiments to falsify or validate the hypotheses—is fundamentally a computational process. Any procedure for experimental verification must be laid out in a set of steps so simple and unambiguous as to be replicable by other scientists all around the world, even if they don't speak our language.
Furthermore, the language of science is mathematics, the strictest and most computable of all human-invented languages. And computers are everywhere in science, too. It is inconceivable today to perform any mildly complex experiment without the help of computers to run simulations, find patterns, and, well, compute stuff. Science is deeply computational, and it always has been. The forefathers of science, Galileo, Newton, Leibniz, and the rest, explicitly focused on quantifiable, measurable phenomena: the only things we could all agree were certain.
But the inner workings of the human mind and the nature of consciousness may be neither measurable nor quantifiable. Even when we finish mapping all neuronal pathways in the brain and discovering everything that happens at all physical levels in the gooey stuff, and load that into a computer, we might end up creating just another Mary, knowing everything about how a mind works, but incapable of truly experiencing what having a mind feels like.
And the sad part is that, if that's the case—if the mind is indeed non-computable—then we might actually never know for sure. After all, the best tool we have to understand how stuff works, Science, may be nothing more than a fancy algorithm.
Or maybe, that's the fun part.