How does matter make mind? More specifically, how does a physical object generate subjective experiences like those you are immersed in as you read this sentence? How does stuff become conscious? This is called the mind-body problem, or, by philosopher David Chalmers, the “hard problem.”

In The End of Science, I expressed doubt that the hard problem can be solved, a position called mysterianism. In a new edition, I argue that my pessimism has been justified by the recent popularity of panpsychism. This ancient doctrine holds that consciousness is a property not just of brains but of all matter, like my table and coffee mug.

Panpsychism strikes me as self-evidently foolish, but non-foolish people—notably Chalmers and neuroscientist Christof Koch—are taking it seriously. How can that be? What’s compelling their interest? Have I dismissed panpsychism too hastily?

These questions lured me to a two-day workshop on integrated information theory at New York University last month. Conceived by neuroscientist Giulio Tononi (who trained under the late, great Gerald Edelman), IIT is an extremely ambitious theory of consciousness. It applies to all forms of matter, not just brains, and it implies that panpsychism might be true. Koch and others are taking panpsychism seriously because they take IIT seriously.

At the workshop, Chalmers, Tononi, Koch and ten other speakers presented their views of IIT, which were then batted around by 30 or so other scientists and philosophers. I’m still mulling over the claims and counter-claims, some of which were dauntingly abstract and mathematical. In this post, I’ll try to assess IIT, based on the workshop and my readings. If I get some things wrong, which is highly likely, I trust workshoppers will let me know.

The Hard-to-Understand Problem

One challenge posed by IIT is obscurity. Popular accounts usually leave me wondering what I’m missing. See for example Carl Zimmer’s 2010 report for The New York Times, Koch’s 2009 Scientific American article, “A ‘Complex’ Theory of Consciousness,” or his 2012 book Consciousness. The theory’s core claim is that a system is conscious if it possesses a property called Φ, or phi, which is a measure of the system’s “integrated information.”

Phi corresponds to the feedback between and interdependence of different parts of a system. In Consciousness, Koch equates phi to “synergy,” the degree to which a system is “more than the sum of its parts.” Phi can be a property of any entity, biological or non-biological. Even a proton can possess phi, because a proton is an emergent phenomenon stemming from the interaction of its quarks. Hence panpsychism.
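To make “more than the sum of its parts” a little more concrete, here is a toy sketch. This is emphatically not Tononi’s Φ, which is far more elaborate; it is just a crude synergy proxy in the same spirit, and the three-node network and its update rule are my own illustrative choices. For a tiny boolean network whose nodes all depend on one another, we compare the entropy of the whole system’s next-state distribution with the summed entropies of its parts considered separately:

```python
import itertools
import math

# Toy 3-node boolean network in which every node is the XOR of the
# other two: a maximally interdependent update rule. NOT Tononi's
# phi, just a crude "synergy" proxy in the same spirit.
def step(state):
    a, b, c = state
    return (b ^ c, a ^ c, a ^ b)

def entropy(counts):
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Tally next-state distributions over all 8 possible current states.
joint = {}
marginals = [{}, {}, {}]
for state in itertools.product([0, 1], repeat=3):
    nxt = step(state)
    joint[nxt] = joint.get(nxt, 0) + 1
    for i, bit in enumerate(nxt):
        marginals[i][bit] = marginals[i].get(bit, 0) + 1

h_whole = entropy(joint)                      # 2.0 bits
h_parts = sum(entropy(m) for m in marginals)  # 3.0 bits

# A positive gap means the parts, viewed in isolation, look more
# uncertain than the whole does: the whole constrains its parts.
print(h_parts - h_whole)  # → 1.0
```

The one-bit gap is the rough sense in which this little system is “more than the sum of its parts”: knowing how the nodes behave individually tells you less than knowing how the whole system behaves.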

Another key phrase is “conceptual structure,” which seems to correspond to the manner in which information is embodied and processed in a particular system at a particular moment. The conceptual structure, which I envision as a circuit diagram or flow chart, determines, or rather is, the conscious experience.

Tononi kicked off the NYU workshop with a 90-minute tutorial on IIT, followed by another hour from Koch. Their presentations paralleled their 2015 paper on IIT, “Consciousness: here, there and everywhere?” Although the paper has whimsical passages (the title echoes an old Beatles song), this excerpt conveys IIT’s forbidding density:

…the central identity of IIT can be formulated quite simply: an experience is identical to a conceptual structure that is maximally irreducible intrinsically. More precisely, a conceptual structure completely specifies both the quantity and the quality of experience: how much the system exists—the quantity or level of consciousness—is measured by its Φmax value—the intrinsic irreducibility of the conceptual structure; which way it exists—the quality or content of consciousness—is specified by the shape of the conceptual structure. If a system has Φmax = 0, meaning that its cause–effect power is completely reducible to that of its parts, it cannot lay claim to existing. If Φmax > 0, the system cannot be reduced to its parts, so it exists in and of itself. More generally, the larger Φmax, the more a system can lay claim to existing in a fuller sense than systems with lower Φmax. According to IIT, the quantity and quality of an experience are an intrinsic, fundamental property of a complex of mechanisms in a state—the property of informing or shaping the space of possibilities (past and future states) in a particular way, just as it is considered to be intrinsic to a mass to bend space–time around it.

Tononi and Koch were at their clearest citing empirical evidence for IIT. The cerebellum, which seems to have less internal connectivity (and hence lower phi) than other neural regions, can be damaged without significantly affecting consciousness. Moreover, brain scans of paralyzed, uncommunicative, “locked-in” patients reveal higher phi in those showing other signs of being conscious.

But the more Tononi, Koch and others talked about information, integration and conceptual structure, the less I understood these notions. I also wondered how scientists can measure a brain’s phi, or integrated information, given their ignorance of how brains encode information.

When I confessed my bafflement to Tononi, he acknowledged that IIT takes a while to “seep in.” Others at the workshop also seemed osmosis-resistant. Participants often called on Tononi to settle disputes about the theory, but his oracular responses did not always clarify matters.

Toward the end of the workshop, someone asked Tononi whether IIT posits that mind and matter are distinct phenomena or that mind is just a byproduct of matter. In other words, is IIT a materialist or dualist theory of mind? Tononi smiled and replied, “It is what it is.” (Perhaps he meant, “IIT is what IIT is.”)

Participants seemed especially confused by an IIT postulate called “exclusion.” According to IIT, many components of a brain (neurons, ganglia, the amygdala, the visual cortex) may have non-zero phi and hence mini-minds. But because the phi of the entire brain exceeds that of any of its components, its consciousness suppresses or “excludes” its components’ mini-minds.

Exclusion helps explain why we don’t experience consciousness as a jumble of mini-sensations, but it has odd implications. If members of a group (say, the IIT workshop) start communicating so obsessively with each other that the group phi exceeds the phi of the individuals, IIT predicts that the group will become conscious and suppress the consciousness of the individuals, turning them into unconscious “zombies.” The same could be true of smaller or larger groups, from a besotted couple to the United States of America.

The Conscious CD Problem

Of course, I could simply be too ignorant to assess IIT. General relativity and quantum mechanics also baffle me, and they’ve fared pretty well. It is thus significant that computer scientist Scott Aaronson, who fully grasps IIT’s technical details, doubts the theory. Speaking after Tononi and Koch, Aaronson described himself as the “official IIT skeptic.” He added, “My lunch seems not to have been poisoned, so thanks.”

Aaronson reprised criticisms he leveled on his blog last year. (See also his followup post.) His main complaint is with IIT’s claim that high phi produces consciousness. “Phi may be a necessary condition for consciousness, but it is certainly not a sufficient condition,” he said.

Aaronson said he could design a wide variety of simple information-processing systems—a two-dimensional grid, for example, running error-correcting codes like those employed in compact discs—possessing extremely high phi. As he stated on his blog, IIT “unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly ‘conscious’ at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data.  Moreover, IIT predicts not merely that these systems are ‘slightly’ conscious (which would be fine), but that they can be unboundedly more conscious than humans are.” [Bold in original.]

Aaronson also faulted proponents of IIT for defending the theory inconsistently. For example, IITers cite the cerebellum’s low phi and lack of consciousness as evidence for the theory, but they can’t be sure that the cerebellum is unconscious; they are simply making a plausible inference, based on common sense.

And yet when confronted with Aaronson’s reductio ad absurdum grid argument, Tononi embraced the absurdum; he suggested that maybe the grid is conscious, and he chided Aaronson for appealing to common sense. Aaronson objected on his blog: “You can’t count it as a ‘success’ for IIT if it predicts that the cerebellum is unconscious, while at the same time denying that it’s a ‘failure’ for IIT if it predicts that a square mesh of XOR gates is conscious.” [Bold in original.]
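For concreteness, the kind of system Aaronson invokes is easy to build. Below is a toy square mesh of XOR gates in which each cell’s next value is simply the XOR of its four grid neighbors. The grid size, wraparound boundary, and starting pattern are my own illustrative choices, not Aaronson’s actual construction:

```python
# Toy "square mesh of XOR gates": each cell's next value is the XOR
# of its four neighbors on a wraparound (toroidal) grid. The rule is
# trivially simple, yet the cells are densely interconnected.
def step(grid):
    n = len(grid)
    return [
        [
            grid[(i - 1) % n][j] ^ grid[(i + 1) % n][j]
            ^ grid[i][(j - 1) % n] ^ grid[i][(j + 1) % n]
            for j in range(n)
        ]
        for i in range(n)
    ]

grid = [[0] * 5 for _ in range(5)]
grid[0][0] = 1  # seed a single 1
for _ in range(2):
    grid = step(grid)

total = sum(sum(row) for row in grid)
print(total)  # → 4 cells are set after two steps
```

The mesh does nothing but a simple parity-style transformation of its state, yet its dense mutual dependence is exactly what, on Aaronson’s analysis, can give such grids enormous phi, hence his charge that IIT is committed to calling them conscious.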

At the workshop, Aaronson called this strategy “heads I win, tails you lose.” In other words, by appealing to common sense when it suits their purposes and rejecting it when it doesn’t, IITers make their theory immune to falsification and hence unscientific.

Aaronson nonetheless complimented IITers for producing a theory precise enough for him to test. As he put it on his blog, “the fact that Integrated Information Theory is wrong—demonstrably wrong, for reasons that go to its core—puts it in something like the top 2% of all mathematical theories of consciousness ever proposed.  Almost all competing theories of consciousness, it seems to me, have been so vague, fluffy, and malleable that they can only aspire to wrongness.”

The Circularity Problem

Tononi shrugged off Aaronson’s criticism. “We have to be prepared to be extremely surprised,” he said. He also suggested that Aaronson had critiqued an outdated version of phi. IIT is “a work in progress,” Tononi said.

Physicist Max Tegmark, Aaronson’s MIT colleague and an IIT enthusiast, presented half a dozen alternative mathematical definitions of phi, which he suggested might be less problematic than the version critiqued by Aaronson. Aaronson said his analysis applied to all the versions of phi proposed by Tegmark and Tononi.

The wrangling over definitions of phi reminded me of the inability of researchers in the once-trendy field of complexity to agree on what they were studying. As I have pointed out, there are at least 40 different definitions of complexity, some of which involve information, another notoriously protean concept.

That brings me to philosopher John Searle’s critique of IIT. Searle voiced his criticism not at the NYU workshop (which he did not attend) but in a 2013 review in The New York Review of Books of Koch’s book Consciousness. Searle complained that IIT depends on a misappropriation of the concept of information:

[Koch] is not saying that information causes consciousness; he is saying that certain information just is consciousness, and because information is everywhere, consciousness is everywhere. I think that if you analyze this carefully, you will see that the view is incoherent. Consciousness is independent of an observer. I am conscious no matter what anybody thinks. But information is typically relative to observers. These sentences, for example, make sense only relative to our capacity to interpret them. So you can’t explain consciousness by saying it consists of information, because information exists only relative to consciousness.

See also the subsequent exchange between Tononi, Koch and Searle, in which Searle said IIT “does not seem to be a serious scientific proposal.” At the NYU workshop, Tononi and other proponents of IIT rejected Searle’s critique, claiming that Searle misrepresents their view of information. But to my mind, Searle zeroed in on IIT’s major flaw.

In fact, Searle’s point applies to other information-centric theories of consciousness, including one sketched out by Chalmers more than 20 years ago (which helps explain his affinity for IIT). Information-based theories of consciousness are circular; that is, they seek to explain consciousness with a concept—information—that presupposes consciousness.

I spelled out this concern in a 2011 post, “Why Information Can’t Be the Basis of Reality.” The post addressed not IIT specifically but the more general proposition that information is a fundamental property of nature, along with matter and energy. I traced this idea to physicist John Wheeler’s notion of “it from bit,” which was inspired by apparent resonances between information theory and quantum mechanics. I wrote:

The concept of information makes no sense in the absence of something to be informed—that is, a conscious observer capable of choice, or free will (sorry, I can't help it, free will is an obsession). If all the humans in the world vanished tomorrow, all the information would vanish, too. Lacking minds to surprise and change, books and televisions and computers would be as dumb as stumps and stones. This fact may seem crushingly obvious, but it seems to be overlooked by many information enthusiasts. The idea that mind is as fundamental as matter—which Wheeler's "participatory universe" notion implies--also flies in the face of everyday experience. Matter can clearly exist without mind, but where do we see mind existing without matter? Shoot a man through the heart, and his mind vanishes while his matter persists.

The Solipsism Problem

Moreover, like all theories of consciousness, IIT slams into the solipsism problem (which is at the heart of Aaronson’s critique). As far as I know, I am the only conscious entity in the cosmos. I confidently infer that things like me, such as other humans, are also conscious, but my confidence wanes when I consider things less like me, such as compact discs and dark energy. (There was a mini-discussion at the workshop over whether dark energy could be conscious, leading Koch to quip, “Let’s not be baryonic chauvinists.”) The solipsism problem is especially acute for IIT because of its panpsychic implications.

IITers have proposed the construction of a “consciousness-meter” that measures the phi and hence consciousness of any system, from an iPhone to a locked-in patient. But such an instrument would not really be detecting consciousness any more than current brain scans do. No conceivable instrument can solve the solipsism problem.

To sum up: Going to the workshop bolstered my bias toward mysterianism. I doubt IIT is taking us closer toward solving the mind-body problem, and I predict that the theory’s metaphysical baggage—panpsychism and all the rest—will limit its popularity.

But I loved the workshop. Watching all those brainy participants grappling with the deepest conundrums of existence, citing Descartes, Leibniz and Hume as well as papers less than a year old, was the most exhilarating intellectual experience I’ve had in a long time. Whatever phi is, my brain brimmed with it by the workshop’s end.

Pondering IIT has also deepened my appreciation of the mind-body problem. In an age of rampant scientism, we need theories like IIT to help us rediscover the mystery of ourselves.

Postscript: For responses to this article from attendees of the IIT workshop and others, see this followup post.

Further Reading:

A "Complex" Theory of Consciousness.

Christof Koch on Free Will, the Singularity and the Quest to Crack Consciousness.

Why Information Can't Be the Basis of Reality.

Do Big New Brain Projects Make Sense When We Don't Even Know the "Neural Code"?

Is Scientific Materialism "Almost Certainly False"?

Why I Am Not an Integrated Information Theorist.

Can Information Theory Explain Consciousness?

Post-Postscript: The IIT workshop was co-sponsored by NYU's Center for Mind, Brain and Consciousness (co-directed by Chalmers and Ned Block) and Global Institute for Advanced Studies.