What’s the difference between science and philosophy? Scientists address questions that can in principle be answered by means of objective, empirical investigation. Philosophers wrestle with questions that cannot be empirically resolved and hence remain matters of taste, not truth.
Here is a classic philosophical question: What creatures and/or things are capable of consciousness? That is, who (and “who” is the right term, even if you’re talking about a jellyfish or sexbot) belongs to the Consciousness Club?
This question animated “Animal Consciousness,” a conference I attended at New York University last month. It should have been called “Animal Consciousness?” or “Animal ‘Consciousness’” to reflect the uncertainty pervading the two-day meeting. Speakers disagreed over when and how consciousness evolved and what is required for it to occur. A nervous system? Brain? Complex responses to the environment? The ability to learn and adapt to new circumstances? And if we suspect that something is sentient, and hence capable of suffering, should we grant it rights?
In my last post, I focused on the debate over whether fish can suffer. Scholars also considered the sentience of dogs, lampreys, wasps, spiders, crustaceans and other species. Speakers presented evidence that creatures quite unlike us are capable of complex cognition.
Biologist Andrew Barron argued that bees, in spite of their minuscule brains, are not mindless automatons. Their capacity for learning rivals that of mammals. When harmed, bees stop eating and foraging as if they were depressed. Bees, Barron concludes, are conscious.
Does that mean all animals are? No, Barron doesn’t think jellyfish, which lack a centralized nervous system, are conscious. If you poke a box jellyfish, it simply moves in the opposite direction. As Barron made the case for excluding jellyfish from the Consciousness Club, I felt sorry for them.
Philosopher Peter Singer, citing Barron’s research on bees, considered whether cockroaches and bedbugs can suffer, and if so what, if anything, we should do about it. To convince us that octopuses are conscious, philosopher Peter Godfrey-Smith showed us a video of them goofing around while a pufferfish watched, seemingly out of pure curiosity.
Looming over these disputes is the solipsism problem. I know I am conscious, but I can’t be absolutely sure that anything else is conscious, because I have access only to my own subjective experience. I’m pretty confident that you and other humans are conscious, because we’re so similar. But my confidence in the consciousness of non-human things diminishes in proportion to their dissimilarity from me.
Neuroscientist Christof Koch suggested in his 2012 book Consciousness that someday science might solve the solipsism problem with a “consciousness meter.” This hypothetical instrument could determine whether a jellyfish or smartphone is conscious and how conscious it is. Measuring an object’s consciousness would be as straightforward as measuring its temperature.
The consciousness meter is a splendid example of begging the question. Scientists cannot build a consciousness meter until they reach agreement on what physical conditions are necessary and sufficient to produce consciousness. But scientists cannot reach agreement on those conditions unless they have a consciousness meter, a means of solving the solipsism problem.
As long as we lack a solution to the solipsism problem, theories of consciousness will be unconstrained and hence wildly divergent. Koch proselytizes for integrated information theory, which holds that even a single proton might be a little bit conscious. The theory implies that consciousness pervades the entire universe, as decreed by the ancient mystical doctrine panpsychism.
At the other extreme are so-called eliminative materialists, who question whether anything is really conscious, including humans. An advocate of this position is philosopher Daniel Dennett, who spoke at “Animal Consciousness.” In his recent book From Bacteria to Bach and Back, Dennett calls consciousness an “illusion.” He comes close to suggesting that we are zombies, beings that appear conscious but actually lack an inner life. (See my rebuttal of Dennett’s argument here.)
The solipsism problem haunted other meetings at NYU I’ve attended over the past two years. At “Ethics of Artificial Intelligence,” computer scientist Kate Devlin considered whether sexbots, robots designed to have sex with humans, might be conscious and hence deserving of rights. At a workshop on integrated information theory two years ago, Koch and other participants debated, only half jokingly, whether dark energy might be conscious.
Last spring, NYU hosted “Is There Unconscious Perception?” Scholars argued over the implications of conditions such as blindsight, which is caused by brain damage. Subjectively, you feel blind, but if someone throws a ball at you, you may catch it. Blindsight proves that perception and other cognitive functions need not be accompanied by consciousness, according to philosopher Ned Block, an organizer of the meeting.
Block reiterated this point at “Animal Consciousness,” which he also helped organize. Other scholars disagree with Block’s interpretation of blindsight data, contending that people with blindsight might possess visual awareness even if they insist that they don’t. That strikes me as a very weird claim. But my point is that even if you restrict your discussion of consciousness to humans, you can’t escape the solipsism problem.
As long as we can’t solve the solipsism problem, we will favor theories of consciousness for subjective reasons. You are big-hearted, so you grant consciousness to spiders, jellyfish, sexbots, dark energy and thermostats. (According to integrated information theory, it might feel like something to be a nuclear warhead!) I am a snooty tight-ass, so I restrict consciousness to humans, primates and a few especially clever birds, like crows. Your Consciousness Club is capacious, mine vanishingly small.
A wonderful, testy exchange between two grizzled philosophical warriors at “Animal Consciousness” exposed the profound divide in modern approaches to consciousness. On the stage was Dennett, who for decades has argued that conventional materialism can account for consciousness.
Sitting in the front row was Thomas Nagel, whose 1974 essay “What Is It Like to Be a Bat?” challenged materialist accounts of consciousness. Nagel reprised these arguments in his 2012 book Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False, which argues that science needs “a major conceptual revolution” to account for the emergence of life and consciousness.
Nagel asked Dennett “on what authority” he insists that consciousness can be reduced to brain states or some other conventional physical processes. Nagel seemed genuinely aggrieved. So did Dennett when he responded that explaining the world in physical terms is what science does. [See Clarification.] There is no rational reason to make an exception for consciousness.
Do you prefer Dennett’s perspective, Nagel’s, or something in between? You may think your preference is wholly rational and objective, but it is based more on taste than truth. We cannot escape our subjectivity when we try to solve the problem of subjectivity. Or so I argue in a book I am writing, Mind-Body Problems.
Clarification: Thomas Nagel, to whom I emailed this post, responded: "Your brief account of my exchange with Dennett leaves the issue somewhat obscure. My question was, by what authority does he allow his external view of himself as a physical system to overrule the evidence presented to him directly in his first-person experience. I think he acknowledges that this is a conflict over the authority of these two points of view in determining our conception of reality. He believes the first has decisive priority. I do not. But I certainly didn't feel 'aggrieved'-- and neither, I'm sure, did Dan."