GGR Newsletter
March 2025
Meat vs Machine
Why the Brain is not a Computer
Matt Mosso
March 2025
As the scientific community continues to grapple with the nearly inconceivable complexity of the brain, we reach for language that captures the essence of this mysterious organ. Metaphorical language has long been a useful tool for pushing the conceptual boundaries of the unknown, taking the amorphous threads of new ideas and weaving them into a familiar shape we can communicate to others. For Aristotle, the known object the brain most resembled was a radiator. (I mean, look at all those folds and bumps! All that surface area must be useful for radiating heat!) The once functionless, wiggly jiggly mass of meat gained purpose when likened to something more intelligible. Although neuroscience has dispensed with Aristotle’s intuitions, parallel discoveries in neuroscience and computing over the last 100 years have shaped a new perception of the brain. It is no radiator…! It is… a computer!
Computers are clearly complex, and it is tantalizing to project the broader concept of computing onto how the brain may process information. On the surface, the comparison appears apt: computers take in some input and produce an output based on a set of instructions. Indeed, we receive all kinds of ‘inputs’ from the environment, and the resulting behavior could be viewed as a kind of output. Yet does this metaphor capture the full extent of how the brain processes, or computes, information? If not, does projecting this metaphor onto the processes that underpin brain function mislead our intuitions about how the brain ought to work?
The first inconsistency in the comparison between brain and computer is the way the ‘computing’ is done in the first place. A traditional computer, in a basic sense, operates in a serial, step-by-step manner; its operation has a fundamentally linear quality. The material processes that carry out these abstract instructions happen extremely fast relative to human perception, which gives us the illusion that all these computations occur at once, or in parallel. But critically, to compute the next step in a chain of computations, each preceding step must occur with perfect accuracy. The brain, by comparison, shows no sign of this strict serial discipline; it fundamentally computes information differently. What’s the difference?
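To make the serial constraint concrete, here is a minimal Python sketch (the function and the example steps are purely illustrative): each operation must wait for, and fully trust, the result of the one before it, so a single wrong intermediate corrupts everything downstream.

```python
def serial_compute(x, steps):
    """Apply each step strictly in order; step n cannot begin
    until step n-1 has finished and its result is trusted."""
    for step in steps:
        x = step(x)  # one faulty intermediate result poisons all later steps
    return x

# Three dependent steps: double, add one, square.
result = serial_compute(3, [lambda v: v * 2, lambda v: v + 1, lambda v: v ** 2])
print(result)  # 49
```

The point of the sketch is the dependency, not the arithmetic: there is no way to run the third lambda before the second without changing the answer.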
I am not going to pretend that I know exactly how the brain computes information. But as my undergraduate biochemistry professor, Dr. Tom Huxford, profoundly asserted, “structure implies function”. Early discoveries by Ramón y Cajal recast the wiggly jiggly meat mass as a composition of individual components called neurons. Neurons revealed themselves as individual nodes comprising a larger network. To fully appreciate the scale and sheer complexity of this network, one must understand that each neuron in the human brain, of which there are roughly 80 billion, connects with, and is connected to, thousands of others. What the structure of the brain reveals about the nature of its computation is a parallel and distributed system that bears zero resemblance to a serial processing device.
A reasonable comparison between neurons and transistors, the basic processing units of brains and computers, is that each exists in either an on or an off state. Introductory neuroscience courses generally teach neuron activity as an all-or-nothing event, which sensibly binarizes it: if the neuron is active, it can be represented as a 1, and as a 0 if it is off. This is the basis for computational neuroscience and supports the brain-as-computer metaphor. Thus, for simplicity, neuroscience readily conceptualizes the brain as a digital device. The utility of this analogy should not be taken lightly. It is the foundation of biomedical discoveries in brain-computer interfacing for operating prosthetic arms or producing artificial speech.

But, as the biologist in me cringes and a yearning for a complete understanding of the fundamental workings of brain processing burns, I am reminded that neurons cannot be reduced to binary switches. Take, for example, an analogy between neurons and light switches. If you flip the switch, the light reliably turns on at the precise moment the switch passes the threshold from off to on. If neurons were like light switches, we would expect that whenever a neuron reached a critical threshold it would turn on and release neurotransmitters, just as flipping a light switch results in light. For reasons beyond the scope of this article, neurotransmitters are not reliably released every time a neuron turns on [see: Review about release properties of synapses]. Even stranger, the threshold for turning on a neuron is readily adjustable, and the signal transmitted along a neuron can be stopped in its tracks before it reaches the terminal stages that result in neurotransmitter release. In fact, turning off a neuron can induce it to turn back on, if it possesses a special type of channel. What this reveals about neurons, and about broader neural systems, is that their computations are not reducible to 1s and 0s.
The biological cascades controlling neuron activity function as a series of checks and balances that can toggle the magnitude and effect of a neuron’s ‘on’ or ‘off’ state. It is hard to imagine how an algorithm that relies on tight binary representations could operate on a system as unreliable as the brain.
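These checks and balances can be caricatured in code. The sketch below is a deliberately crude leaky integrate-and-fire toy, not a biophysical model; every name and number is illustrative. It captures two of the departures from the light-switch picture described above: the threshold is just an adjustable parameter, and crossing it does not guarantee neurotransmitter release.

```python
import random

def neuron_step(v, input_current, threshold, release_prob, leak=0.9):
    """One toy time step of a neuron: the membrane potential leaks,
    integrates its input, and may spike -- but a spike only releases
    neurotransmitter with some probability, unlike a light switch."""
    v = v * leak + input_current              # potential decays, then integrates input
    spiked = v >= threshold                   # the 'all-or-nothing' event
    released = spiked and (random.random() < release_prob)  # release is unreliable
    if spiked:
        v = 0.0                               # reset after a spike
    return v, spiked, released

# The threshold is itself adjustable, and subthreshold input does nothing:
print(neuron_step(0.0, 0.5, threshold=1.0, release_prob=0.9))  # no spike, no release
print(neuron_step(0.0, 2.0, threshold=1.0, release_prob=0.9))  # spike, release only sometimes
```

Even this cartoon already breaks the clean 1/0 story: the same input can produce a spike with release, or a spike with none.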
Yet, as we thrust ourselves into a post-modern world filled with AI chatbots, artists, and playwrights, it can be tempting to find similarities between biological and artificial neural networks. Given ample good-quality data and well-founded algorithms, neural networks can identify meaningful patterns and regurgitate those patterns to convey meaning in a humanistic way. With these systems essentially passing the Turing test, we are living through a quantum leap in computing. State-of-the-art machine learning models can identify broad contextual patterns, giving them an almost indistinguishably human-like capacity. Yet a critical difference between these models and the brain is that the model defining an artificial neural network is not adjustable in real time. The model the algorithm has learned for processing inputs into outputs is static. If a person in Kentucky and a person in Iowa who have never used ChatGPT before ask it the same question, they will receive the same answer (arbitrarily induced randomness in these models exists to produce slightly different answers, but this rather proves the point). The only way the model will change its answer is if the model itself is updated, after supplying 10,000 state-of-the-art GPUs with gigawatts of power for the small price of $10,000,000. Then a new model with new answers will emerge. Compare that to my puny wiggly jiggly meat mass, which has written and rewritten this paragraph after iterating my own mental model of these concepts, for free and with the power supplied by a bagel and coffee. That is to say, our brain still occupies a computational domain these intelligent computer models have not yet breached. Not only does our brain have the capacity to learn, but it can also change how it learns without having to be shut off… sleep be damned. The inputs and outputs regulated by biological neural networks are malleable and subject to change in an instant.
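The static quality of a trained model can be boiled down to a toy sketch (the weights and values are made up, not drawn from any real system): inference reads the parameters but never writes them, so every caller gets the same fixed function.

```python
def frozen_model(x, weights=(0.5, -0.2), bias=0.1):
    """A trained network reduced to its essence: a fixed function.
    Using the model never changes its weights -- learning would
    require replacing the weights themselves."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias

# Two 'users' asking the same 'question' get the same answer every time,
# because inference reads the weights but never updates them.
kentucky = frozen_model((1.0, 2.0))
iowa = frozen_model((1.0, 2.0))
print(kentucky == iowa)  # True
```

A brain-like system would be one where the act of answering the question nudged the weights; here, by design, nothing of the sort happens.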
In this way, the computational principles the brain relies on appear different from the computational abstractions layered atop the binary code that underlies machine learning models.
It is no secret that the principles underlying exactly how the brain enacts and modifies the range of behaviors and perceptions we experience across a lifetime remain a mystery. Metaphor is a powerful tool for articulating the ineffable, and the brain-as-computer metaphor has proved useful for conceptualizing the inner workings of brain processing. The utility of this metaphor is clear from an engineering perspective, and it need not be dispensed with like Aristotle's radiator. Yet the satisfying solutions that penetrate the reality of brain processing may lie outside the metaphorical capacity of computers as we currently understand them. If the expectation is that models of brain processing ought to abide by computational principles implemented in silicon chips, models will be shoehorned into fitting critical features of computers. Whether implicitly or explicitly, emerging models of brain processing will converge toward algorithms that operate on binary representations in a step-by-step fashion. Indeed, typical models of a particular brain function, like language interpretation or memory consolidation, often conceptualize each step as a serial process.
It is easy to forget that the map is not the territory and that the insights gained by applying the brain-as-computer metaphor may not facilitate the future discoveries neuroscience seeks. To peel back a new layer of reality, perhaps it is time to escape the gilded cage by invoking a new metaphor that redefines our vision for how this wiggly jiggly meat mass seamlessly resolves our world.