Annaka Harris’s recent book Conscious presents a concise and well-written exploration of consciousness, with an emphasis on the mystery of subjective experience. Harris approaches this difficult topic from a secular, scientific perspective, and she pulls no punches in attacking our deeply held intuitions. At the same time, Harris is not afraid to venture outside the scientific mainstream by endorsing the idea of panpsychism—that all matter contains some basic form of subjective experience.
While Conscious contains a wealth of ideas worthy of discussion and critique, I will focus on one issue in particular. In her quest to illuminate consciousness using modern science (before embarking on speculations about panpsychism), Harris ironically and unwittingly falls prey to Cartesian dualism, the seductive idea of René Descartes that mind and matter are two different things, composed of different substances, occupying different realms. Dualism comes almost naturally to human thought, and it has ensnared not only Harris but many other eminent explorers of consciousness. We can use Harris’s book, however, to expose this trap.
The word consciousness may evoke all kinds of different concepts, so Harris starts by attempting to define the term. She adopts a definition proposed by philosopher Thomas Nagel in his landmark essay, “What Is It Like to Be a Bat?” According to Nagel, an organism has consciousness “if there is something that it is like to be that organism.” This definition is far more metaphysical than scientific. As Nagel himself points out, science cannot reveal “what it is like” to be another creature, or why it even should be like anything. Science can tell us only about things that can be observed and measured externally, such as neuronal activity and behaviors, including reports of “what it is like.”
But Harris uses Nagel’s metaphysical definition for a reason: she wants to distill the essence of what makes consciousness so interesting and mysterious. For the layperson at least, the mystery lies not in the physical brain processes studied by science, but in the phenomenon of subjective experience. Harris thus uses the term consciousness to refer only to a person’s subjective experience or inner life, apart from the underlying brain activity. “Consciousness,” she says, “is what we’re referring to when we talk about experience in its most basic form.” Harris even invokes philosophical zombies—those pesky creatures who have all the behavior and brain activity associated with human consciousness, but none of the inner experience. Although Harris is agnostic as to whether philosophical zombies are physically possible, she uses them to test our intuitions about the function of consciousness. For Harris, consciousness is the very thing philosophical zombies lack.
Harris’s definition of consciousness as pure subjective experience is not in itself problematic in a discussion of metaphysics. But Harris does not dwell in the metaphysical realm for long; she quickly jumps into the field of modern science, eager to assess the role of consciousness from a grounded, objective perspective. To that end, she poses two concrete questions: whether we can detect conclusive evidence of consciousness in another human being, and whether consciousness is “essential to our behavior.” And she attempts to answer these questions using the findings of modern neuroscience, even though the “consciousness” she refers to is a metaphysical construct, beyond the reach of any scientific investigation. This leads her straight into the pit of Cartesian dualism.
Let’s examine the second question more closely—it is the more provocative one and is meant to shock our intuitions. Harris answers it in the negative, tentatively concluding that consciousness is not essential to behavior—that consciousness is just a passive witness of our actions, a ride-along observer. “In theory,” she says, “few (if any) of our behaviors need consciousness in order to be carried out.” Before addressing the merits of this counterintuitive proposition, let’s consider how Harris phrases the question. She asks not only whether consciousness affects “behavior,” but also whether consciousness affects “the physical system that’s conscious.” (Emphasis added.) One can see here a Cartesian separation of the mental and physical realms.
Consider yet another rephrasing that is even more telling. “In other words,” Harris asks in a footnote, “if consciousness comes at the end of a stream of information processing, does the fact that there is an experience make a difference to the brain processing that follows? Does consciousness affect the brain?” This question suggests that two events occur sequentially: first, the “information processing” (which I take to mean the brain activity associated with consciousness), and second, inner experience. But, unless we subscribe to Cartesian dualism, we know that only one event actually occurs—the information processing. Nothing happens “at the end” of this processing—it’s all just brain activity. And it’s futile to ask whether consciousness affects the brain if the only thing out there is the brain itself.
Don’t think that Harris denies the reality of consciousness or subjective experience: she most assuredly does not. Indeed, she points out the absurdity of claiming that consciousness is an illusion: after all, having an illusion itself requires consciousness. Harris is right in this regard. The denial of consciousness is a behaviorist idea that was abandoned decades ago, and most neuroscientists today treat subjective experience as a very real phenomenon, worthy of careful investigation. True, we don’t (yet) know why or how certain brain activity manifests itself as subjective experience—that’s the “hard problem” of consciousness that Harris attempts to resolve through panpsychism. But we do know that whatever subjective experience is, it must be a physical thing—an aspect or manifestation of certain observable neuronal processes, not part of some separate realm or substance, as Descartes proposed. Subjective experience is not something we can conceptually isolate from brain activity, while still adhering to the dictates of physicalism.
Despite viewing consciousness metaphysically, Harris relies primarily on modern science to support her conclusion that consciousness is not involved in behavior. For instance, she cites neuroscientific experiments—especially the infamous one by psychologist Benjamin Libet—showing that ostensibly voluntary bodily movements are in fact initiated by brain activity “before subjects feel they make the decision to move.” Crucially, however, these experiments treat inner experience and the associated brain activity as one and the same event—conscious brain processing—and they address only the relationship between conscious brain processing and unconscious brain processing. These experiments show, at most, that unconscious brain processes cause certain motor actions before the occurrence of conscious brain processes associated with feelings of decision-making.
Libet-like experiments do not, and cannot, say anything about subjective experience as happening apart from or “at the end of” brain activity. When probing subjective experience, scientists necessarily rely on its external manifestations, namely, behavior (such as verbal reporting of subjective experience) or neuronal activity, as revealed in various brain scans. Knowing human physiology, scientists presume that all behavior is causally connected to brain activity. No scientist (or anyone else for that matter) can observe or detect a metaphysical consciousness that is detached from brain activity or behavior. No one could ever tell a conscious person apart from her philosophical zombie twin based on objective criteria. Simply put, neuroscience has nothing to say about Harris’s version of consciousness, and so it cannot support her conclusions.
In this light, Harris’s definition of consciousness as purely subjective experience, combined with her claim that consciousness does not affect behavior, results in an inherently dualistic vision of reality. In this view, consciousness is some incorporeal entity hovering above the brain, somehow observing the body’s behavior but incapable of affecting it. This is a ghost in the machine—Cartesian dualism at its core. Harris envisions a much weaker ghost than Descartes had in mind, but a ghost nevertheless.
If we are to reject Cartesian dualism, we must define consciousness as comprising both inner experience and the associated brain activity. We cannot view the mind and body as separate actors that may or may not “affect” each other. If subjective experience is real, it must be part of the physical world, so we must view the entire corpus of our thoughts and feelings—including feelings of “self” and “will”—not as a ghost in the machine, but as a part of the machine itself. And if we look at it that way, the only sensible answer to Harris’s second question is a resounding “yes”—consciousness does affect behavior.
I think Harris would agree with this assessment. For instance, in her discussion of free will, she acknowledges that the “brain, as a system, does have a type of free will … in that it makes decisions and choices on the basis of outside information, internal goals, and complex reasoning.” This sounds like a description of physical consciousness. Harris simply does not accept (or acknowledge) that the physical “system” she describes is all there is, and that subjective experience is just a part of this system. Now, once we throw dualism away, this physical system can be further divided into two distinct physical parts: (1) brain processes that cause behavior without conscious awareness (tics, for example), and (2) brain processes associated with conscious awareness. The question then becomes, can conscious neuronal processes affect behavior, or are they somehow isolated from the unconscious processes? I believe the answer is obvious, but let’s test some intuitions.
Imagine you’re attending a tech company’s unveiling of a new humanoid robot. This robot’s head is encased in transparent plastic, allowing you to see the complex array of electronic circuits powering its “brain.” Multi-colored LED lights are embedded throughout this circuitry, and they show exactly which circuits are active at any given time. The robot’s default mode is to wander slowly around the room while avoiding obstacles. But if you say the robot’s name out loud, it will respond and start interacting with you. You notice that when the robot simply moves around, only a few blue lights are active inside its head, but when the robot interacts with someone, numerous green lights begin to flicker.
You reasonably deduce that the green flickering lights identify the circuits responsible for the robot’s complex interactive behavior, and that only a modest portion of the circuitry is needed to guide the robot around the room. Now, gauge your reaction if someone declares, “I bet those green light circuits do not affect the robot’s interactions—they’re just passively monitoring its behavior without doing anything.” You might think this is possible, but you would want a thorough explanation of such seemingly grotesque engineering. Why create a whole bunch of complex circuitry for nothing? And you would want proof that removing all this circuitry will not affect the robot’s functionality.
Further, if you claim that physical consciousness does not affect behavior, you would have to accept a myriad of seeming absurdities, such as the following:
- My conscious desire to bake a cake for my child’s birthday had nothing to do with my subsequent act of purchasing flour and sugar and then baking a cake.
- My conscious examination and contemplation of a test question had nothing to do with my answering it correctly.
- My sudden conscious recall on the way to work that I forgot my briefcase had nothing to do with my turning around, going back home, and retrieving the briefcase.
You would have to conclude that the above actions were caused solely by subliminal brain processing of unknown origin, not any conscious desires or thoughts. If this sounds implausible, we’re on the same page.
Even if we set aside our intuitions, there is no shortage of scientific proof that the brain circuitry responsible for consciousness plays a big role in behavior. Indeed, we need look no further than the Libet-like experiments cited by Harris. In these experiments, she explains, “subjects watch a special clock and, according to an instrument similar to the second hand on a traditional clock, mark the exact moment they decide to move.” So, for these experiments to work, the subjects have to accurately report their own subjective experience. We must conclude, then, that it is the subject’s experience that caused the behavior of reporting it. Otherwise, we couldn’t trust the results of the experiment—we wouldn’t know when the actual experience occurred. And we cannot say that the reporting itself caused the experience, since we know that many conscious experiences occur without being reported. In other words, you must have a conscious experience in order to accurately report having it (unless you’re a philosophical zombie, and even then, you’d be lying!), but you need not do anything to have a conscious experience. Again, Harris would likely agree; she poignantly observes that one should not be able to think and talk about consciousness without first experiencing it.
In his book, Consciousness and the Brain, neuroscientist Stanislas Dehaene describes numerous experiments that he and his team performed in trying to discover the neural “signatures of consciousness”—brain activity that is not merely correlated with, but is responsible for, subjective experience. These signatures, Dehaene explains, are “present whenever conscious perception occurs and absent whenever it does not.” To shed light on the difference in brain activity between subliminal processing and conscious awareness, Dehaene relies heavily on the reports of subjects as to whether or not they consciously perceived a particular stimulus. As he puts it, a key ingredient in the science of consciousness is that “subjective reports can and should be trusted.”
When subjects report perceiving an image flashed before their eyes, brain scans reveal certain signatures of consciousness, including a “global ignition” of neuronal activity in disparate areas of the brain. Conversely, when the subjects deny perceiving the image, these signatures of consciousness are conspicuously absent. If there were no causal connection between conscious perception and the act of reporting, we would expect no correlation between signatures of consciousness and reports of seeing the image, and we could not trust the experimental results. Of course, it’s not just the behavior of reporting conscious experiences that is caused by consciousness (though this behavior is a staple of everyday life). Dehaene tells us, for instance, that consciousness is necessary for tasks reliant on working memory, such as solving problems in multiple sequential steps.
This isn’t to say that all behaviors require consciousness—far from it. Most of our highly routine and reactive actions, such as walking, scratching or even recognizing familiar symbols, can occur on autopilot, and Dehaene talks at length about the powers of the unconscious brain. But more complex activities do require consciousness, and it’s important to recognize that the unconscious and conscious modes of processing can and do coexist within the human cognitive system. In his bestseller, Thinking, Fast and Slow, Daniel Kahneman describes the functioning and interactions of these two modes: System 1 governs our fast, simple, minimally conscious actions, and System 2 governs slow, complex, deliberative actions. So if you’re concerned about the implications of Libet-like experiments (which show that unconscious brain activity can predict certain bodily movements that subjects think are consciously willed), keep in mind that these experiments involve very simple acts, such as flicking a wrist, which are quintessential System 1 behaviors.
Libet-like experiments, therefore, are perfectly consistent with the fact that a substantial part of human activity is caused by conscious thoughts or processes. Harris comes close to acknowledging this. “It’s not clear,” she says, how the “types of simple motor decisions” seen in Libet-like experiments “relate to more complex decisions, like choosing what to eat for lunch or deciding between two job offers.” But then Harris brushes aside this huge caveat, without discussing it further. That’s unfortunate. If someone asks me if I consciously intended to drink my coffee, I would be inclined to say yes, but I would not be surprised if a Libet-like experiment could predict in advance each time I pick up the mug to take a sip. But if my brain scan reveals the text of a haiku poem before I consciously become aware of composing it, I would be shocked indeed.
The bottom line is that each of our actions hangs on a long chain of physical causes. It’s a mistake to think, as Harris seems to, that subjective experience cannot be part of that chain, for what else could subjective experience be? The vast and complex machinery of the brain is not haunted by ghosts. So, unless we succumb to the siren song of Cartesian dualism, we must think of subjective experience as part of that machinery, as a physical link in the causal chain. If we do, it becomes virtually impossible to sustain the claim advanced by Harris and others that consciousness is just sitting in the bleachers rather than playing the game.