FAQs
The Method of Integrated Information Theory
These FAQs especially build on the content of the Introductory Overview and Foundations pages. If you have a question that is not yet covered by those pages (or the FAQs below), please add or upvote a new question on the comment threads at the bottom of those pages.
What do you mean by consciousness?
[Ch. 1: Phenomenal and physical existence.]
Some employ redundant expressions such as “subjective experience,” “conscious experience,” “phenomenal consciousness,” and so on [1]. There is still a puzzling variety of uses of the terms conscious and consciousness. Some neurologists refer to consciousness as “awareness of the self and the environment,” which would exclude, for example, dreaming. Some philosophers and scientists refer to the ability to reflect on one’s experiences. Others would like to eliminate the term consciousness altogether and only describe certain classes of behaviors and functions.
But philosophers and scientists who are interested in the so-called mind–body problem usually accept that being conscious should be considered synonymous with having an experience—any experience: sounds and sights, colors and shapes, thoughts and emotions, will and wishes, and so on, but also simply darkness and silence. They often explain what they mean through the expression “what it is like to be.” The expression was introduced by Farrell [2] and Sprigge [3] and made popular as a definition of consciousness by Nagel [4].
Footnotes
Cite this FAQ
Why not build a theory of consciousness starting from empirical neuroscientific research?
Given the exciting advances in neuroscience, it is reasonable to ask, Why not build a theory of consciousness starting from empirical research on the brain, especially the neural correlates of consciousness (NCCs)? It goes without saying that empirical work is essential in explaining consciousness. However, IIT proposes that we start instead with introspection and reasoning to build our theory, and leave empirical work to validate the theory. There are two key reasons why “starting from the brain” will lead to an inadequate theory.
1. If we “start from the brain,” we will struggle to answer the two why questions.
As outlined here, IIT aims to answer two why questions: 1) why is experience present vs. absent? and 2) why do specific experiences feel the way they do? If we “start from the brain,” these questions will remain elusive.
First consider the question of presence vs. absence. Imagine you are a neurologist. On the hospital bed in front of you lies a patient who just had a severe stroke. Her eyes are open, scanning the room. She occasionally utters a sound, but nothing intelligible, nothing that lets you conclude whether she is conscious or whether every movement and sound are a mere reflex “in the dark.”
Since you cannot communicate with her or diagnose based on behavior, you might try to infer whether she is conscious directly from her brain using a “mind-reading” technique. In one famous case, patients were asked to imagine playing tennis or walking around their home as a way of answering yes or no, respectively [1]. For this to work, however, your patient must be able to hear the instructions, understand the task, be motivated to comply, be able to generate the mental imagery, and so on. The problem for your diagnosis is that if your patient does not complete the task, we have made little progress: when it comes to consciousness, “absence of proof is not proof of absence” [2]. Even the best techniques today only help you confirm that your patient is conscious (when report is possible and reliable). But they are unlikely to offer a principled way to infer when she is not conscious, or conscious but unable to report.
Now fast-forward a few decades. Neuroscientists have created a thorough map of the brain and uncovered all NCCs. These correlates have been confirmed extensively through patient behavior and report, so you feel confident you can use the NCC map to diagnose your patient. But can you? How can you know the map is complete? And what about that nagging problem of absence? There could always be cases in which an area is active but no report of consciousness is ever obtained. Is it right to exclude such areas, or might they be covert NCCs? Again here, if the map is built by relying on behavior and report alone, we will struggle to make inferences beyond humans—not only to unresponsive patients, but also to animals or artificial systems.
For a complete theory of consciousness, we should aim to know not just that a neural signature correlates with consciousness but why—why some neural areas and mechanisms comprise the NCCs and, just as importantly, why other neural areas and mechanisms do not. Only this way can we make strong inferences about consciousness being present or absent in cases where communication is impossible, or in cases where covert NCCs are involved.
Consider a simple analogy. Every time your child opens the refrigerator, he sees that the light is on. So based on the refrigerator’s “behavior,” he concludes that the light is always on. But then you show him the pressure switch: you push it in and, to his amazement, the light goes off! Now when he opens the door, he not only sees that the light is on but also understands why. He can now make the strong inference that when the door closes, the light turns off. Can he be sure? Strictly speaking, the only way is if he climbs into the fridge to witness the light turn off [3]. But he can also be quite sure by simply using his new knowledge of the mechanism to make a strong inference: he has reason to think the switch and light work even if the door is sealed and can’t be opened. Likewise with NCCs: only if we have a theory of why they correlate with conscious states can we make strong inferences about both presence and absence.
Let us now turn to the second why question—why specific experiences feel the way they do. How well can we answer this starting from the brain? There are ongoing content-specific NCC studies. For example, through lesion studies in particular, neuroscientists have discovered that the fusiform face area (FFA) must be in working order for a person to experience faces. Though this fact is clinically useful, knowing that specific patterns in FFA correlate with seeing faces gives us little insight as to why—why should it correlate with faces and not with, say, the feeling of nausea?
Some have brushed off such why questions, claiming that once we establish all objective correlates of consciousness, there is nothing left to explain. Others predict that such questions will become unimportant the more we learn about neuroscientific mechanisms of consciousness—that is, that the “hard problem” will fade away once we have solved enough “easy problems.”
We disagree. Mere correlates will not scratch our itch for explanation. There must be a systematic, coherent reason why FFA is a correlate of faces, V2 a correlate of low-level visual features, and Wernicke’s area a correlate of speech comprehension. In isolation, each neuron in FFA, V2, and Wernicke’s area is more or less the same; so why should they specify such distinct experiences? Why should neurons in FFA allow you to “see” a face in the first place (instead of, e.g., hearing it), and why should it feel like a face instead of like middle C? There must be reasons that are both self-consistent and system consistent.
Finally, if the why questions are left unanswered, all NCCs would only relate to a single population: humans who can introspect and report their experiences. Could we ever generalize beyond that population? Could we use the NCCs to make a strong inference about whether octopuses or non-biological systems are conscious and what their experience might be like? Probably not. A theory should give us predictive power to be able to assess the presence and quality of consciousness in entities completely different from us. And this need is becoming increasingly pressing the more we progress towards creating artificial systems that exhibit behavior often associated with consciousness in humans.
2. All empirical brain research implicitly starts from experience, but IIT does so explicitly and rigorously.
Neuroscientists sometimes claim that they are working in an objective or even “theory-free” way by starting from the brain instead of from phenomenology (which some discard as “subjective”). They use reports from experimental subjects or patients as objective proxies of consciousness. For example, neurosurgeons help to reveal NCCs of specific conscious states by stimulating a certain brain area and asking the subject to report what they experience (or to perform a task). However, the only reason we trust such reports in the first place is that we ourselves know what it means to have an experience.
For example, if a patient reports, “I taste chocolate,” when the surgeon stimulates a certain brain area, we trust that that area corresponds to the NCC of chocolate taste because this report is sensible based on our own subjective experiences. But if the patient reports, “I taste exercise,” we may think they didn’t understand the instructions or that they’re suffering from, say, Wernicke’s aphasia or schizophrenia. Why? Again, because their report does not match what we ourselves know to be within the realm of standard experiences.
No matter how objective one aims to be, subjective report is a valid paradigm because researchers have experiences themselves: report can be a proxy for experience because when we experience we can typically report. In this sense, experience is the implicit basis of every empirical research program on consciousness. If research claims to “start from the brain,” therefore, it is not really bypassing subjectivity; it is rather drawing on it in an implicit—and thus incomplete—way.
IIT aims to make the experiential basis of consciousness science as explicit as possible by using introspection and reasoning to identify properties of experience (see axioms). IIT also makes extensive use of subjective report, but in the step of empirical validation—not in building the theory itself.
For more, see
FAQ: If IIT starts from experience, isn't this method "subjective" and thus unscientific?
Consciousness and the Fallacy of Misplaced Objectivity (academic paper)
In sum, empirical facts about the brain are essential to validate a theory of consciousness. As outlined in the IIT Method, IIT proposes we start not with the brain but with phenomenology, using introspection and reason to fully define our explanatory target—that is, the properties of our own experience. These properties are what need to be explained, and IIT does so by demonstrating a one-to-one identity between properties of consciousness and the physical (causal) properties of Φ-structures. This identity allows us to answer the two core why questions. To validate this core identity, empirical neuroscience is indispensable. The more thorough the validation is in humans, the stronger our inferences will be about consciousness in unresponsive patients and beyond humans altogether.
Footnotes
Cite this FAQ
If IIT starts from experience, isn't this method "subjective" and thus unscientific?
Science aims to explain regularities in the world in objective terms. An explanation is usually understood as objective if it is not influenced by value judgements or the whim and inconsistencies of an individual scientist [1]. Needless to say, objectivity in science has been extremely successful. IIT too aims at an explanation of consciousness in objective terms that can be validated scientifically. The key question here, however, is not about how things are explained (the explanans), but about what needs to be explained (the explanandum). In this sense, the science of consciousness is unique [2].
In all other areas of science, the explanandum—the thing to be explained—can and should be presented in objective terms, whether it’s the properties of subatomic particles, biological systems, or galaxies. In the science of consciousness, however, the “object” of study is subjectivity itself. In other words, consciousness is literally the one explanandum that, by definition, is not objective.
So how should we proceed? Many have tried to sidestep this challenge by swapping in objective proxies for subjective experience—usually in the form of neural, behavioral, or functional correlates of consciousness. For example, these approaches might set out to explain a visual function, such as seeing a stimulus appear on a screen and fixating it. They may proceed to “explain” this by identifying the brain areas and circuits involved in detecting the stimulus and moving the eyes. But this explanation, then, is an account of a behavior or a function, not of the fact that a human fixating a stimulus would first and foremost “see” it—experience it at a location in their visual field. It feels like something to see the stimulus, and this feeling also needs to be accounted for [3]. Again, if that feeling is swapped out with an objective proxy (e.g., fixating), we get an explanation of that proxy, not of an experience. Functionalist accounts are valuable and worth pursuing, but we must be clear that they belong to the science of cognition at large, not of consciousness per se.
To define consciousness as a scientific explanandum, IIT considers first-person experience as the main source of knowledge. We must use the “first-person tools” of introspection and reasoning to identify both essential properties of experience (the axioms) and the accidental properties of specific experiences (e.g., of the feeling of space). These properties form our explananda—that is, they are the properties we then try to account for in objective terms.
Might IIT’s axioms be incomplete? Yes. Might my description of, say, the feeling of space be inadequate? Yes. For this reason, we must be clear that “starting from subjectivity” is not at all the same as “being subjective.” Consider the axioms; they can only be interrogated and discovered through the introspection and reasoning of an individual. Yet that does not make them “subjective.” On the contrary, they should be confirmed intersubjectively: I should check them against my experience, you against yours, and everyone else against theirs. In the case of IIT, the five axioms are based on over two decades of this intersubjectively objective process involving dozens of people. The axioms of IIT may turn out to be incomplete, but the method of starting from phenomenology is sound—indeed, it is indispensable to lay bare what we aim to explain in the science of consciousness.
Why not start from the reports of experimental subjects? Isn’t this an easy way to identify our explananda objectively from the start? This oft-raised question reveals an ironic oversight: the only reason we trust third-person report is because we ourselves have first-person experience (see FAQ: Why not build a theory of consciousness starting from empirical neuroscientific research?). If a person reports regular hallucinations, we diagnose a disorder; if they report hearing music when presented with a visual stimulus, we diagnose synesthesia; and if they report seeing things as we do, we diagnose normal and healthy vision. Third-person data only makes sense because we know what it is to experience something—from our own, first-person perspective. In this sense, IIT can be seen as taking its methodological first step explicitly and thoroughly, while most take the same step implicitly and thus incompletely. Only on this basis can we then use third-person report systematically and rigorously. And indeed, third-person report is essential in the scientific validation of IIT.
In conclusion, we have seen that because consciousness is subjectivity itself, we cannot explain it scientifically if we start by abstracting subjectivity away or substituting it with functional or behavioral proxies. Rather, we must use subjective means to articulate the properties of experience that need to be explained; we then verify these properties intersubjectively against the experience of others. Once this explanatory target is clear, we then proceed to explain it in objective terms and to validate that explanation empirically.
To see this argument in an academic paper, see Ellia et al. (2021), Consciousness and the fallacy of misplaced objectivity. Neuroscience of Consciousness 2021, no. 2: niab032.
Footnotes
Cite this FAQ
What constitutes a “good explanation” of consciousness, according to IIT?
Challenges abound in studying consciousness scientifically. They begin with characterizing consciousness itself, which is susceptible to all sorts of limitations of introspection, memory, report, and so on. Challenges also include the myriad difficulties of isolating and measuring neural signatures of conscious states.
To overcome these hurdles, the science of consciousness needs to bootstrap a satisfactory explanation by using any promising method and source of evidence available—empirical studies, philosophical analysis, introspection, reason, computer modeling, trial and error, and so on. As a guide in this process, we can consider seven S’s of what a good overall explanation should look like [1]. Each is illustrated below using both the toy explanation that “water is H2O” and a teaser of what the given criterion means in IIT’s approach to accounting for consciousness scientifically:
Scope: The explanation should account for a broad set of facts.
The explanation of water doesn’t only account for the properties of liquid water; it also accounts for freezing, thawing, evaporation, sublimation, etc.
Likewise, IIT aims to account for consciousness vs. unconsciousness in every behavioral state (wakefulness, sleep, seizures, anesthesia, etc.). It also aims to account for the content of consciousness in unaltered states (e.g., all sensory modalities; the experience of extendedness and the flow of time, of conceptual categorization, of emotions, of thinking, etc.) and in altered states (e.g., during meditation or dreaming, or under the effects of psychedelics).
Specificity: It should explain facts precisely.
The explanation of water can be used to explain, for example, the fact that water has the particular polarity it has. Similarly, the electrochemical properties of its constituents—two hydrogen atoms and one oxygen atom—can be used to account precisely for why a stream of water can be bent by electrostatic forces and why salts can be dissolved in it.
Likewise, IIT aims to explain facts precisely—for example, why our experiences are associated with particular regions of neocortex and not others, or why the perception of a red apple feels “red” and “apple-like” (and not, say, like the sound of a bell).
Self-consistency: It should be internally coherent.
We cannot have one explanation that applies only to liquid water and another only to gas. Rather, the H2O explanation accounts coherently for behavior within and across states.
Likewise, we cannot have one explanatory framework that applies only to dreaming experiences and another only to waking experiences. The explanation should rather account for any experience (regardless of behavioral state) in an internally coherent manner. Note that self-consistency cuts both ways: while the explanatory framework must be the same throughout, the specific details of how we account for, say, spatial extendedness cannot simply be recycled to account for, say, the feeling of visual objects.
Synthesis: It should account for disparate facts in a unifying manner.
The molecular explanation of water helps explain very disparate phenomena—for example, its polarity, its capacity to dissolve salts, its surface tension, its refraction patterns, etc.—in a unifying framework.
Likewise, by using the same set of axioms and postulates, and the same mathematical framework, IIT is able to account for broad-scope facts in a unifying way—for example, for why the cortex is likely to be the neural substrate of consciousness in humans (as opposed to the cerebellum), why the richness of our experiences seems to fade in deep sleep, and why visual experiences appear spatially extended.
System consistency: It should be coherent with our overall view of things.
The chemical explanation of water as H2O is consistent with physical theories describing the motion of liquids, on the one hand, and with biological theories describing the function of water in biochemical reactions on the other.
Likewise, IIT’s account of consciousness in terms of cause–effect power is consistent and continuous with our understanding of physics and biochemistry, on the one hand, and with systems neuroscience and psychology on the other.
Simplicity: It should be simpler than alternatives.
The explanation of water in terms of kinetic-molecular theory alone is quite parsimonious, making it preferable to a theory that would have to invoke additional “laws of hydrology” to explain the same phenomena.
Likewise, IIT accounts for consciousness in terms of cause–effect power alone—defined strictly in operational terms of interventional conditional probabilities, with no additional ingredients. If this account suffices, it is preferable to one that would have to invoke, say, “strongly emergent properties” of different kinds for every mode of experience.
Scientific validation: It should make testable predictions and explain scientific facts.
It is possible to predict (or explain) that the boiling point of water decreases with elevation, and to demonstrate this through empirical tests.
Likewise, IIT not only offers explanations of well-known, disparate neuroscientific facts, but also makes falsifiable predictions that can be tested in human subjects. Among these are also counterintuitive predictions—for example, that silent neurons contribute directly to experience.
In sum, a good explanation of consciousness will not only answer the two why questions but will do so in a way that satisfies these criteria. The seven S’s will return in the section on the fundamental identity of IIT, where we assess the degree to which the theory fulfills them as an overall good explanation of consciousness.
For more, see Identity as a good explanation.
Footnotes
Cite this FAQ
If introspection requires reflection, how can we be sure that non-reflective experiences exist at all?
[Ch. 1: Phenomenal and physical existence.]
The experiences that serve as the starting point for ontology are typically reflective experiences, which require abstract concepts as well as the ability to introspect, reason, remember, compare, and so on. For example, take the experience of seeing the blue sky. To convince myself that the existence of my experience is immediate and irrefutable, I start by introspecting. First, I focus my attention on the fact that I am seeing the sky and that it appears blue. I may then ask myself questions, such as whether I can doubt that I am seeing the blue sky. I reflect for a moment and come to the answer, which I report to myself: my experience of the blue sky is immediately present—it is not something I must infer—and I cannot refute its existence. I also reflect that it would be inconceivable that I could experience anything—another sight or thought, say—without that experience also existing.
To draw such conclusions, I must at least be able to conceive of abstract, highly general concepts, such as experience and existence, and then to attend to those concepts, keep them in working memory, reason about them, and report to myself. I may also need the ability to imagine and evaluate alternatives, to generalize, to extrapolate to other possible experiences, and to refer to myself as the subject of these experiences. Clearly, Descartes could do all the above, but a baby, or somebody heavily drugged, likely could not.
The ability to conceive abstract concepts and reflect on experience is thus a prerequisite for understanding and assessing the axioms of phenomenal existence. However, it is most plausible that experiences can exist whether or not they involve such concepts and reflection. Even though Descartes had started from the active, reflective experience “I think” to establish beyond doubt that experience exists, by his Third Meditation, he had generalized his conclusion: “I am a thinking thing, that is, a thing that doubts, affirms, denies, knows little and ignores much, wills and refuses, imagines and feels.” In other words, he concluded that passive, non-reflective experiences, such as “I feel,” also exist.
William James, who considered the existence of experience to be the foundation of psychology, also began with thought: “The first fact for us, then, as psychologists, is that thinking of some sort goes on.” Just like Descartes, James hastened to add, “I use the word thinking […] for every form of consciousness indiscriminately.” Moreover, he made it clear that he was not referring to himself as an “I” above and beyond the experience: “If we could say in English ‘it thinks,’ as we say ‘it rains’ or ‘it blows,’ we should be stating the fact most simply and with the minimum of assumption. As we cannot, we must simply say that thought goes on” [1].
But can I be equally certain about the existence of experiences that are not reflective? Can I be sure that I was seeing the blue sky, before I started paying attention to it and realized that I was seeing it? Some thinkers have been seduced by the argument that the existence of non-reflective experiences cannot be proven [2]. Take Locke’s statement, “Consciousness is the perception of what passes in a Man’s own mind” [3], which seems to suggest that the mere “passing through the mind” may not be conscious, or at least not until it is noticed. An extreme version of this idea is that I might only have an experience when I think about it. To see a possible intuition behind this stance, consider the following sequence of thoughts: I think that I am currently seeing the blue sky. But did I actually have the visual and spatial experience of the blue sky before I started thinking that I was experiencing it? Perhaps not, for how can I rule out an illusion of memory? That is, while I may have the impression that I did catch a glimpse of the blue sky before I started thinking about it, perhaps only my current impression is real, while the memory is illusory. In other words, the existence of non-reflective experiences can indeed be doubted.
Strictly speaking, only reflective experiences exist beyond doubt, but it is most plausible that non-reflective experiences also exist. Indeed, the existence of non-reflective experiences should be considered an inference to a good explanation from within a reflective experience, just like the existence of dreams, of other minds, and, ultimately, even of the external world. The idea that we do not experience anything unless we think about it is a paramount example of inference to a bad explanation, or at least of silly extremism: it means dismissing a hypothesis that is highly plausible, based on inferences to a good explanation, in favor of one that is highly implausible, simply because truth cannot be proven beyond any doubt [4]. For example, the hypothesis that I was experiencing the screen the whole time I was writing seems much more plausible than the alternative hypothesis that I had no experience whatsoever—that the light of my consciousness only turned on when I stopped writing and started thinking that I am actually seeing the screen. For these reasons, the 0th axiom is broadly put as “experience exists” rather than as “reflective experience exists” [5].
Footnotes
[1] Along similar lines, it has been remarked that Descartes would have been more correct had he rather said something like, “thinking is occurring, therefore something exists.”

[2] For example, it lies at the heart of “higher-order” theories of consciousness, which explicitly propose that an experience comes into being only when it is reflected upon. Also, many psychologists argue that there cannot be consciousness without attention, or even that attention and consciousness are the same thing, implying again that only reflective experiences are conscious [...].

[3] Locke, John. (1869). An Essay Concerning Human Understanding, Volume II, 1, §19, 115. Locke and Phemister 2008.

[4] Another example of inference to a bad explanation is the belief that only my experience exists. Most people reasonably believe that the “external world” continues to exist whether or not they close their eyes, fall asleep, or die. They also believe that other people have experiences. However, a few philosophers have pointed out that, strictly speaking, neither belief is immune to doubt, thus embracing silly solipsism. Most people also tend to believe that when they wake up from sleep and remember a dream, it is because they were actually experiencing it. However, some philosophers (and some occasional thinkers) have pointed out that this belief, too, can be doubted. Accordingly, it is conceivable that, throughout sleep, we lay down memories while we remain fully unconscious, and that we become conscious of them only when we wake up and report them. This example of inference to a bad explanation can be found in Malcolm 1956. Dennett (1976) follows a similar argument and concludes that both alternatives are equally likely, an example of false equivalence.

[5] Neuroscientific evidence about the neural substrate of reflective and non-reflective experiences makes the inference more plausible not just at face value but also empirically. For example, we have already discovered that when I see a face, imagine a face, or dream of a face, the same “face area” is activated in my brain (and not when I do not experience it), irrespective of whether I am reflecting about the face or whether it is a target for some task. Now assume that we further discover that when I am reflecting on something, a “thought area” is activated, irrespective of what I am thinking about. Similarly, when I keep in mind a target for some task, a “task area” is activated, irrespective of whether the target is a face, a place, or any other object. In view of such discoveries, it would be far more plausible that the face area supports the experience of the content face, whether or not I am thinking about it or treating it as a target in some task.

Malcolm, N. (1956). Dreaming and skepticism. The Philosophical Review, 65(1), 14–37.

Dennett, D. C. (1976). Are dreams experiences? The Philosophical Review, 85(2), 151–171.
Cite this FAQ
Why is the perturbational approach so important in IIT?
The phrase “correlation is not causation” is so common it has become a cliché. But it points to the fact that we need something more than mere observations and data analysis to get at how the physical world works causally. Just as randomized controlled trials are the gold standard in medical research, a meticulous perturbational approach is essential in IIT. This means that we must manipulate the system we are studying in a controlled way, and observe and record the results to isolate the causal interactions.
One general reason to do this is to reduce the influence of statistical dependencies (correlations) in the system, which might otherwise inflate the estimates of causal interactions. In IIT, however, there is a greater reason for emphasizing the perturbational approach: the entire theory is built on the assumption of the physical defined operationally in terms of cause–effect power (see Three Methodological Assumptions). Hence, for IIT’s parsimonious methodology to work, we must be sure we’re truly isolating the causal interactions as precisely as possible.
In principle, IIT aims to obtain an exhaustive understanding of the causal structure of physical systems. At the “atomic” grain—operationally defined as the smallest spatiotemporal grain we could possibly observe and manipulate—we must set the system of interest into each of its possible states and observe the result, while holding everything outside the system constant in its present state. We must repeat this systematically, until we have a reliable estimate of the full transition probability matrix (TPM) of the system. This methodology will, in principle, provide a complete description of how the system transitions from any state to any other state at the given graining (i.e., how its state will evolve given that the background conditions do not change).
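As a toy illustration (not part of the IIT formalism itself; the network, gate choices, and function names here are invented for the example), the exhaustive perturbational procedure above can be sketched for a tiny deterministic network of binary units: set the system into every possible state, observe the resulting transition, and tabulate the results as a TPM whose rows are probability distributions over next states.

```python
from itertools import product

import numpy as np

# Toy network of three binary units: an OR, an AND, and an XOR gate,
# each reading the previous state of the other two units.
def step(state):
    a, b, c = state
    return (b | c, a & c, a ^ b)

n = 3
states = list(product((0, 1), repeat=n))  # all 2^n possible states

# Perturb the system into every possible state and record the outcome
# (background conditions are trivially fixed in this self-contained toy).
tpm = np.zeros((2**n, 2**n))
for i, s in enumerate(states):
    nxt = step(s)  # deterministic here; a stochastic system needs repeated trials
    tpm[i, states.index(nxt)] = 1.0

# Each row of the TPM is a probability distribution over next states.
assert np.allclose(tpm.sum(axis=1), 1.0)
```

For a real (noisy) system, each row would instead be estimated from the empirical frequencies of observed transitions, which is precisely where the practical limits discussed below come in.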
A TPM forms the basis for the rest of the IIT methodology for assessing the cause–effect power of a substrate and unfolding the cause–effect structure it specifies. However, because the full perturbational approach is untenable in practice [1], acquiring the TPM of a system requires certain informed assumptions (see FAQ: How do we get a TPM?).
For more, see
Perturbational complexity index (an empirical method inspired by the perturbational approach of IIT) [1]
Footnotes
Cite this FAQ