Being Human, October 10, 2012
Hilary Putnam is not a household name. The Harvard philosopher’s work on the nature of reality, meaning, and language may be required reading in graduate school, but his fame hasn’t extended far beyond the academy. One of Putnam’s thought experiments, however, is familiar to millions of people: what would it be like to be a brain in a vat?
Here’s how Putnam presented the idea in his 1981 book, Reason, Truth and History:
Imagine that a human being…has been subjected to an operation by an evil scientist. The person’s brain…has been removed from the body and placed in a vat of nutrients which keeps the brain alive. The nerve endings have been connected to a super-scientific computer which causes the person whose brain it is to have the illusion that everything is perfectly normal. There seem to be people, objects, the sky, etc.; but really, all the person…is experiencing is the result of electronic impulses travelling from the computer to the nerve endings.
Philosophers have wondered for thousands of years how we can be sure whether what we’re experiencing is reality or some shadowy deception. Plato imagined people looking at shadows cast by a fire in a cave. Descartes imagined an evil genius bent on deceiving him. Starting in the 1960s, philosophers began to muse about what it would be like to be a brain in a vat, with reality supplied by a computer. The story circulated in obscure philosophy journals for over a decade before Putnam laid it out in his book.
To track the rise of the “brain in a vat” story, I turned to the Google Ngram Viewer, a website that can search for any word or phrase you supply in Google’s digital library of millions of books and magazines. After Putnam published his account, the story exploded, the number of times it appeared rising like a rocket into orbit. Hollywood made billions off the image by making it the basis of The Matrix movie series.
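If you’d like to run this kind of search yourself, here is a minimal Python sketch. It calls the JSON endpoint that the Ngram Viewer’s own web page uses; that endpoint, its parameters, and the corpus label are assumptions about an unofficial interface rather than a documented API, so treat the details as provisional.

```python
# Minimal sketch: fetch the yearly frequency of a phrase from the Google Books
# Ngram Viewer's unofficial JSON endpoint (assumed URL and parameters).
import requests

def ngram_frequencies(phrase, year_start=1950, year_end=2008):
    """Return (years, frequencies) for a phrase in the English corpus."""
    resp = requests.get(
        "https://books.google.com/ngrams/json",
        params={
            "content": phrase,
            "year_start": year_start,
            "year_end": year_end,
            "corpus": "en-2019",   # assumed corpus label; may change over time
            "smoothing": 3,
        },
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()             # a list with one entry per matched ngram
    if not data:
        return [], []
    freqs = data[0]["timeseries"]
    years = list(range(year_start, year_start + len(freqs)))
    return years, freqs

# Print the curve for "brain in a vat"; as described above, it takes off
# after Putnam's 1981 book.
for year, freq in zip(*ngram_frequencies("brain in a vat")):
    print(year, freq)
```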
But there’s something telling and important about the success of the brain in a vat that usually goes unremarked. Putnam’s story became an instant hit because it made sense. To see why that matters, imagine that Putnam had instead asked you to picture an evil scientist removing your heart rather than your brain. The scientist puts your heart in a vat and connects its veins and arteries to a computer, causing you to have the illusion that everything is perfectly normal.
This thought experiment would strike a modern listener as absurd. Of course, given the current state of technology, it’s also absurd to think that a human brain could be kept alive in a vat. And yet the idea that a scientist could create a full-fledged experience for someone in their brain remains plausible. It accords with how we think about the brain. We all know that the brain is where we receive sensations, store memories, experience emotions. We all know that all those sensations, memories, and emotions are encoded in electrical impulses in the brain. If indeed you could keep a brain alive, and if indeed you could supply it with the right electrical impulses, then it makes perfect sense that the person whose brain you had extracted would go on having the same experiences as before.
It’s a remarkable assumption when you think about it. None of us has held our own brain in our hands. We have no direct evidence from experience of how it works. Nevertheless, we all agree that the brain is the center of our world. It’s a world, after all, where the death of the brain is equivalent to death itself.
It was not always thus. Consider the words of Henry More, one of the leading English philosophers of the seventeenth century. In 1652, he wrote that the brain “shows no more capacity for thought than a cake of suet or a bowl of curds.”
To us this seems like madness. But More was no fool. Given the philosophical and medical traditions in which he was educated, such a low view of the brain was eminently sensible.
For all the cognitive power that the human brain contains, it’s also exquisitely delicate. It has the consistency of custard. When an ancient anatomist decided to investigate the organs of a cadaver, he would have had no trouble pulling out the heart and manipulating its rugged chambers and valves. But after death, the brain’s enzymes make quick work of it. By the time the anatomist had sawed open the skull, he might well be looking at nothing but blush-colored goo. Who could ever think that in that goo could be found anything having to do with our very selves?
When ancient anatomists examined the heart, the brain, and the rest of the body, they came up with explanations for what each organ did. Many of their explanations feel weirdly alien today. Aristotle, for example, believed that the heart was responsible for perceptions and actions. The brain was something like a refrigerator. It was made of phlegm, which was cold by nature, and so its coldness could flow down to balance out the raging heat of the heart.
It may seem bizarre that the founder of Western biology could have gotten the brain so wrong. But Aristotle was working from what was known at the time, and what he could see for himself. There were no microscopes that could reveal to him the hidden filigree of neurons in the brain and the nervous system. No one in his day even knew that nerves existed.
Other scholars in ancient Greece looked a bit more favorably on the brain. Instead of seeing it as an air conditioner, they viewed it more like a pump. The body was set in motion by animal spirits, which coursed through the nerves, inflating them like string-shaped balloons. The spirits flowed through cavities in the head, and it was the job of the brain to squeeze down and pump them on their way.
Christian scholars in medieval Europe brought together the Bible with ancient Greek philosophy, including this view of the brain. In their books on anatomy, they drew absurdly confident atlases of the inside of the head, dominated by three ventricles arranged in a row and linked by channels. It somehow didn’t matter that no one could ever see such chambers in the brains of cadavers. Anatomists had an explanation at the ready: after death, the animal spirits departed the body, leaving the ventricles to collapse like sails on a windless day.
This vision—self-consistent and powerfully explanatory—held sway over many great minds. Even Leonardo da Vinci was in its thrall. Whereas previous generations of anatomists might simply consult the work of an ancient Greek writer, Leonardo wanted to see anatomy for himself. He filled notebooks with revelatory sketches of bone, muscle, and even fetuses in the womb. And to understand the structure of the brain, he devised a brilliant experiment. After having an ox slaughtered, Leonardo injected hot wax into its skull. He waited for the wax to cool and then opened the skull up. The wax, having filled the ventricles of the brain, would preserve their structure.
In his notebook, we can see what Leonardo saw: that the ventricles looked nothing like the medieval chambers. They swept up through the brain like hollow horns or wriggled between the hemispheres. But we can also see how Leonardo imposed onto that anatomy his medieval ideas about how the brain worked. He created links between the ventricles where none existed, so that they could remain a channel for the animal spirits that he assumed gave life to the body.
Leonardo sought to publish his anatomical research, but eventually wars and other distractions forced him to abandon the project. No one was able to see his glimmerings of the brain’s true anatomy. It remained for a younger anatomist, Andreas Vesalius, to publish such an account in his 1543 masterpiece, De Humani Corporis Fabrica.
Vesalius’s method for drawing the brain was grisly. He would saw through the heads of cadavers (typically executed criminals) at different depths, and as he worked his way down through the brain, he would draw each newly exposed layer. With other cadavers, he would cut off the entire skull cap and slit apart the membranes, exposing the furrowed surface of the cerebral cortex.
It was all rather messy, and very far from complete. But it was better than anything anyone had achieved before—better even than what Leonardo da Vinci had managed, which is certainly saying something. Vesalius even went so far as to question the workings of the ventricles. But he shied away from proposing an alternative explanation. In the sixteenth century, such a proposal could have raised the ire of the church.
Nevertheless, Vesalius pushed anatomy in a new direction. Anatomists gradually began to publish their own research, not just on the structure of the body, but also on its function. The scientific revolution replaced the four humours of the body with atoms and molecules, subject to the laws of physics and chemistry. Natural philosophers recognized that the same kinds of chemical reactions that turned grape juice into wine were at work inside the human body. This revolution eventually reached the brain. In 1664, the English physician Thomas Willis published the first book dedicated to the organ: The Anatomy of the Brain and Nerves. It was also the first book to present accurate anatomical drawings of the brain in full.
Willis succeeded in large part thanks to the company he kept. His assistant Richard Lower (who would later go on to pioneer blood transfusions) ably dissected brains completely out of their skulls. Willis’s friend Robert Boyle discovered how to preserve delicate organs like brains in alcohol. Willis now had the luxury of time to examine the brain in detail. And Christopher Wren handled the medical illustrations and microscopic examinations of the brains.
Willis combined their insights with his own observations of thousands of patients, as well as careful experiments in which he injected ink into the cerebral arteries to trace their paths. This synthesis led Willis to a radically new picture of the brain and its functions. The ventricles, which had once channeled the animal spirits, were mere infoldings. Willis argued that animal spirits traveled through paths inside the brain to carry out different functions. Damage to different parts of the brain, he argued, led to different kinds of disorders.
Like any scientist, Willis was still enmeshed in his age. He knew nothing about electricity, and so he could not guess that the phenomenon he witnessed in a lightning storm was taking place in his own head. Not until well over a century after his death did scientists such as Luigi Galvani discover that electric current could travel down nerves, finally banishing animal spirits from neurology.
In Galvani’s time, electricity was an amusement, the stuff of parlor tricks. No one imagined that it could power civilization. Nor could they imagine that electricity could deliver messages nearly instantaneously. In 1844 Samuel Morse set up America’s first telegraph line, running from Washington to Baltimore, and one of the first messages transmitted on it came from the Democratic National Convention. The convention delegates, who had gathered in Baltimore, picked a senator named Silas Wright as their nominee for vice president. They needed to know if Wright would accept or refuse the nomination, but he was in Washington. The president of the convention decided to send a message to Wright by telegraph.
Wright immediately wired back: No. The delegates refused to believe that a message could fly down a wire. They adjourned the convention and sent a flesh-and-blood committee by train to see Wright in person. Wright turned them down again. After the committee came back to Baltimore with the news, the convention president took some delegates to the telegraph office to see the machine for themselves. And yet, he later wrote, “many of the delegates shook their heads and could not but think the whole thing a deception.”
Imagine how much they might have shaken their heads if they had been told that their experience of the telegraph was made possible by similar pulses of electricity traveling through their nerves and brains.
The telegraph’s dribble of digital pulses foreshadowed today’s torrents of Internet communication. By the mid-twentieth century, mathematicians had developed a method for using a digital system of ones and zeroes to carry out computations. Transistors sent signals to one another, combining flows of information to produce new outputs.
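The basic trick is easy to illustrate. The Python sketch below is not from the original essay; it is a toy “half adder,” the simplest adding circuit inside a digital computer, showing how two streams of ones and zeroes can be combined by elementary logic operations to produce new outputs.

```python
# Toy illustration of digital computation: two bits go in, and simple logic
# operations combine them into new outputs (a sum bit and a carry bit).
# Transistor circuits do exactly this, billions of times per second.

def half_adder(a, b):
    """Add two one-bit numbers; return (sum_bit, carry_bit)."""
    sum_bit = a ^ b   # XOR: 1 when exactly one input is 1
    carry = a & b     # AND: 1 only when both inputs are 1
    return sum_bit, carry

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```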
It became increasingly clear that brains and electronics shared much in common. In 1963, the neuroscientist Jose Delgado displayed their seamless union on a cattle ranch. He implanted electrodes in the brains of bulls, which he could activate with a remote control. The bulls charged toward Delgado, and with a touch of the remote, he could force them to skid to a halt within just a few feet of him.
To philosophers like Hilary Putnam, this must have been a thrilling moment. Indeed, even as Delgado was controlling animals with electrodes, Putnam was developing a computational theory of mind, in which sensations traveled into the brain as input, and the brain then functioned like a computer to produce output commands. Putnam was no neuroscientist and didn’t care much about the details of how one neuron connected to another. Instead, he argued that the structure of thought itself showed signs of being the product of computation. It didn’t much matter what carried out those computations—neurons or transistors could do the job. It was this shift in thinking that made the brain in a vat so easy for people to absorb. If, as Delgado had shown, electronics and the brain were seamless, then surely it should be possible for an evil scientist to have his way.
Over the past two decades, the brain-in-a-vat thought experiment has itself evolved. Imagine that you are facing death. Now imagine that a well-meaning scientist offers to make a perfect map of your brain, recording all 100 trillion synaptic connections that encode your memories, your feelings, everything that is you. She then uploads that information into an equally detailed model of a human brain, one that can be supplied with inputs and that produces outputs of its own. Perhaps your uploaded mind exists solely within a virtual universe. Meanwhile, your biological brain dies along with your failing body.
In some circles, brain uploading is considered a serious possibility as computers continue to grow more powerful, and as we learn more about the structure and the function of the brain. For philosophers, it presents a new puzzle. The computer of Hilary Putnam’s thought experiment extends its sphere, taking over the brain’s own computation, until there is no brain left. If you are uploaded into a computer, would your self still be yourself? How could you even know whether you’ve already been uploaded? How could you know if you had ever been outside of a computer?
These are entertaining questions to consider, but they are far from practical ones. The mind may indeed be computational, but that does not mean it resembles any computer humans have built. It processes information in a massively parallel fashion, rather than sequentially, as man-made computers do. Its memory does not exist like bits on a hard drive, but in a distributed, dynamic pattern of connections. Its computations do not create a full-blown representation of the world, but only useful predictions, which allow us to control our bodies.
Nor do our brains exist in isolation, like some laptop sitting on a table that can be simply powered up. They are embedded in bodies, and they have evolved to depend on a continual flow of feedback about how well their predictions have fared in the outside world. And, finally, out of all that computation, consciousness emerges. While many scientists are exploring the nature of consciousness in inventive ways, no one has a theory that makes sense of it yet.
Are we brains in a vat? Strictly speaking, it’s hard to prove we’re not. But in any world—real or manufactured—we still know so little about how brains work that we wouldn’t be able to put Putnam’s thought experiment into practice.
Copyright 2012 Being Human. Reprinted with permission.