
Consciousness and "Crazyism": Responses to Critique of Integrated Information Theory

Readers react to a critique of a radical new theory of consciousness.

This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American.


I recently wrote about a workshop at New York University on integrated information theory, an ambitious theory of consciousness. Below are responses to my article. Unless otherwise indicated, the responses are from people who attended the workshop. Other attendees who want their comments posted should email me. –John Horgan

Adam Pautz, Brown: I was a participant in the two-day conference at NYU discussed in this article. I thought the conference was great and I learned a lot. I agree that IIT has weird predictions. But if a theory of consciousness fits the data and is more elegant than the alternatives, maybe we should accept the theory even if it has some weird predictions. After all, some of our best physical theories have weird predictions too.

My central concern about IIT – which I feel wasn’t really answered at the conference – is different. I don’t have a clear grasp on what the theory is a theory of. If you look at how IIT is formulated, it is not just a theory of when consciousness is present or absent. It is more specific; it is a theory of the *amount* of consciousness in an arbitrary system. The theory is that the *amount* of consciousness in a system is its level of Phi. So, for instance, it implies that, if a 2D grid has a Phi value that is (say) 10 times greater than your brain’s, then this 2D grid has 10 times the “amount” of consciousness that you have (even when you are fully awake and have had your morning coffee). Indeed, it implies that the amount of consciousness in such a system is *unbounded* - since its Phi level is unbounded. My worry about this is not Aaronson’s – namely, that such predictions are counterintuitive. Rather, my point is that it is not even clear what these predictions mean. What could it even mean to say that a 2D grid might have, say, “10 times the amount” of consciousness that you have when you are fully awake? In general, I don’t yet know what proponents of IIT mean by talk of the “amount” of consciousness – a supposedly unbounded dimension of our experiences (and indeed one that has a ratio scale, on IIT, since Phi has a ratio scale). This is not yet an objection, but a request for clarification. However, if proponents of IIT cannot clearly explain what unbounded dimension they have in mind, then it becomes an objection, because it means that IIT is a theory without a clear subject matter.


By the “amount” of consciousness in a system, do they mean the number of experiences it has? Or the *intensity* of its experiences – so that if you turn up the volume on the radio, the “amount” of consciousness you are enjoying goes up? I am sure that they mean neither of these things. (They would not say, for instance, that the 2D grid has 10 times *the number* of experiences that you have, or that it has auditory experiences that are 10 times “louder” than yours, or anything of the sort.) Or do they perhaps mean the “amount” of information represented by an experience? But all experiences – even the experience of a blank wall – rule out infinitely many possibilities. Finally, by the “amount” of consciousness in a system, do they mean something about how much of the information represented in conscious experience is being cognitively accessed (so that when you just wake up and are inattentive, you count as having a low amount of consciousness)? But this can’t be what they mean either. For one thing, their view implies that there can be a large amount of consciousness even in a system, such as the 2D grid, *where there is no cognitive access at all*.

Another, separate problem with IIT, it seems to me, is that it is very abstract, and it is hard to see how certain specific facts about experiences might be explained in terms of IIT. To take just one example: suppose you hear five tones and your experiences of them are separated by equal pitch intervals. This is a phenomenological fact about your conscious experiences. But, given the resources of IIT (Phi value, nodes, cause-effect powers, etc.), it is very hard to see how this fact might be explained.

Garrett Mindt, Central European University: I was in attendance as well (thanks to David Chalmers and Hedda Morch for organizing the workshop), and I share some of the same worries Adam Pautz expressed previously regarding the *amount* of consciousness and what exactly that entails. In a way I am an IIT sympathizer (not entirely convinced, but interested), and so I see these worries as avenues of growth for the theory, rather than damning consequences.

On the note about explaining what it means for something to have a higher/lower *amount* of consciousness, perhaps IIT points to an issue in studying consciousness, namely whether consciousness is an all-or-nothing affair or something that comes in degrees. If one thinks consciousness is all-or-nothing, then IIT will look like it must be false, since different things will have varying degrees of phi. But if one thinks having consciousness is a matter of falling somewhere on a spectrum, then IIT gives you a quantifiable framework for determining where on that spectrum a particular system falls. We seem to already talk this way when we try to ascribe consciousness to non-human creatures. I am reluctant in certain circumstances to ascribe human-like consciousness to some creatures, but nonetheless wouldn’t say they lack consciousness completely. Likewise, it seems perfectly conceivable that human-like consciousness isn’t the end-all-be-all of the consciousness scale, and I would find it very odd if this “pale blue dot” were where consciousness reaches its pinnacle in the cosmos. Unfortunately, we are trapped in our own particular degree of consciousness, and so other points on the spectrum don’t seem intuitively plausible. We would have no idea what it would be like to be a system with a higher/lower degree of phi, just as we don’t know what it’s like to be a bat (presumably a lower-phi system according to IIT)!

One comment with regard to the part of the article that points to Searle’s objection against IIT on the grounds that it uses an observer-relative notion of information. Searle is thinking of information as having some sort of semantic content, which must be observer-relative since observers are the ones who interpret that meaning. The problem I have with this characterization of information with regard to IIT is that IIT doesn’t seem to be committed to this definition of information. As I understand it (someone correct me if I am wrong), IIT holds something like C.E. Shannon’s (1948) notion of information from his essay “A Mathematical Theory of Communication” (although some of the talk at the workshop and in some of the essays seems to have a more causal notion of information, not just differences, but differences that make a difference). As Shannon writes in that paper,

“These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design.”
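To put that in formula form (this is just the standard textbook statement of Shannon’s measure, not anything specific to IIT’s phi): the information carried by a source is defined entirely by a probability distribution over its possible messages, as the entropy H = −Σ p(x) log2 p(x), summed over the possible messages x, where p(x) is the probability that message x is the one selected. Nothing in that definition refers to what any message means.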

I take it IIT is concerned with the “engineering problem”, and not the semantic one, just in terms of defining information. If this is the case, then IIT isn’t using an observer-relative notion of information (and thus, not consciousness) since it doesn’t require semantics to be intertwined with the notion of something that expresses a difference or a relation of differences. Information that is observer-relative in the sense Searle uses as an objection to IIT appears to be a prescriptive notion of information, which tells you what that message is supposed to mean. Whereas, the notion used by IIT looks to be descriptive, in that it just tells you what that state is, and that it is just a difference among a set of possible differences – it is just a message out of any number of possible messages (not what that message means). I suspect the “meaning” is supposed to come after and there is presumably a story to be told by IIT that explains this. It doesn’t look to me that this notion of information is susceptible to the type of objection raised by Searle. I have to say that the workshop was wonderful and I learned quite a bit about the theory! Very interesting stuff to keep thinking about. 

Scott Aaronson, MIT (from his blog): Over at Scientific American’s website, John Horgan posted an account of a workshop on Integrated Information Theory, which I attended a couple weeks ago at NYU (along with David Chalmers, Giulio Tononi, Christof Koch, Max Tegmark, and a dozen or so others).  I was the “official skeptic” of the workshop, and gave a talk based on my blog post The Unconscious Expander.  I don’t really agree with what Horgan says about physics and information in general, but I do (of course) join him in his skepticism of IIT, and he gives a pretty accurate summary of what people said at the workshop.  (Alas, my joke about my lunch not being poisoned completely bombed with the IIT crowd … as I should’ve predicted!)  The workshop itself was lots of fun; thanks so much to David, Giulio, and Hedda Hassel Morch for organizing it.

Oliver Burkeman, The Guardian: You are probably aware of [philosopher Eric] Schwitzgebel's “crazyism” (http://www.faculty.ucr.edu/~eschwitz/SchwitzAbs/CrazyMind.htm), which has the implication, as I understand it, that any correct theory of consciousness is definitely going to sound totally ridiculous.

Matthew David Segall, California Institute of Integral Studies (who did not attend workshop but commented on his blog “Footnotes2Plato”): I don’t understand IIT well enough to defend it, but I applaud the effort to make progress toward a scientifically operationalizable definition of consciousness. But it seems to me that part of the problem with all the confusion around IIT is a lack of philosophical clarity about concepts like “mind” and “matter.” So for better or worse we need more philosophy before we can study consciousness scientifically. Otherwise we don’t even know what we’re studying. I’d echo another commenter on Horgan’s article who made the very helpful statement: “Alfred North Whitehead.”

No one has developed a more sophisticated, coherent, and adequate account of panpsychism than he. If we want to understand the conceptual lay of the land, his books Science and the Modern World, Process and Reality, Adventures of Ideas, and Modes of Thought are a good place to start. Whitehead was led to a variety of panpsychism because of his deep appreciation for the implications of quantum and relativity theory. In other words, he was led to panpsychism because of and not in spite of the best physics of his day.

Whitehead’s scheme is sophisticated enough to be able to make distinctions between classes of things like chairs and paperweights on the one hand and living cells and human beings on the other; which is to say that, for Whitehead, rocks are not conscious entities: they belong to a class of entities called aggregates that are not self-organizing and so do not possess consciousness in and of themselves (though their self-organizing components may). So Mr. Horgan, let’s please stop throwing rocks at panpsychism as though that were some kind of adequate refutation…. [For more see http://footnotes2plato.com/2015/12/06/john-horgans-article-in-scientific-american-on-panpsychism/.]

Christian List, London School of Economics: I thought I'd send you the link to my own paper that I presented at the conference--the one that looks at group consciousness through the lens of IIT (and also offers some more general remarks about IIT).

http://philsci-archive.pitt.edu/11569/1/GroupConsciousnessPhilSci.pdf