FAQs

The Postulates of Integrated Information Theory

These FAQs presume familiarity with the postulates of IIT. If you have a question about the postulates that is not yet covered here, please add a new question or upvote an existing one on the comment thread of the respective page: Intrinsicality, Information, Integration, Exclusion, Composition.

Why are the postulates so important in IIT?

The postulates are a cornerstone of the IIT method: they formulate the axioms in operational terms and are the basis on which the mathematical formalism is developed. They thus provide a principled basis for understanding whether and to what degree a substrate is conscious, and the particular way in which it is conscious (i.e., “what it is like” to be that substrate). The postulates are not only the linchpin of IIT’s analysis of substrates; they have also spurred a number of empirical, theoretical, and technological developments with applications both within and beyond the quest to understand consciousness. Some of these developments have fed back on the theory itself, while others have shed light on adjacent areas of research. A few examples are presented below.

Explaining facts about the brain 

The postulates help explain well-known facts about the relation between the brain and consciousness. For example, neurological evidence indicates that the cortex is much more strongly associated with our everyday consciousness than the cerebellum. This holds despite the fact that the cerebellum has roughly four times as many cells as the cortex and contains one of the most complex cell types in the brain (the Purkinje cells).

IIT explains this by comparing the neuroanatomy of the two regions. The cortex is highly integrated in a way that allows not only for high values of integrated information but also for causal composition (structure) at several levels of description. The cerebellum, with its more modular connectivity, appears to be less integrated and is therefore unlikely to be the substrate of consciousness. Thus, applying a combination of postulates (here, integration, composition, and exclusion) provides a principled explanation for why the cortex rather than the cerebellum is associated with consciousness in humans. Of course, such an explanation must also be validated empirically, but the point is that the postulates offer a rigorous guide to developing principled conjectures to be tested.

For more, see Empirical Validation of IIT. 

Objective measures of consciousness in humans 

At present, the IIT method can only be applied to very simple systems composed of discrete and binary units. Still, the postulates can be (and have been) used to inspire measures and heuristics for testing the presence vs. absence of consciousness. The most prominent example is the perturbational complexity index (PCI), which is designed to give “high” readings only for systems that are simultaneously integrated yet differentiated—a design directly inspired by the integration and information postulates. This measure has been very successful in distinguishing between conscious and unconscious states in human subjects [1].
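To make the core computation concrete, here is a minimal sketch of a PCI-like quantity: binarize a perturbational response and count its distinct phrases with a simplified Lempel-Ziv parsing. The toy data and function names are ours for illustration; the published PCI pipeline additionally involves TMS-evoked EEG recordings, source modeling, statistical thresholding, and normalization.

```python
import numpy as np

def lz_phrase_count(s: str) -> int:
    """Count distinct phrases in a simplified left-to-right LZ parsing."""
    i, count = 0, 0
    while i < len(s):
        length = 1
        # extend the current phrase while it already occurs in the prefix
        while i + length <= len(s) and s[i:i + length] in s[:i]:
            length += 1
        count += 1
        i += length
    return count

# Stand-in for a binarized perturbational response (channels x time)
rng = np.random.default_rng(0)
response = (rng.random((8, 50)) > 0.5).astype(int)
sequence = "".join(map(str, response.ravel()))
print(lz_phrase_count(sequence))  # higher = less compressible = more complex
```

A flat, stereotyped response compresses well (low count), whereas a widespread but non-repetitive response does not—capturing, roughly, the co-presence of integration and differentiation.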

For now, measures such as PCI are only rough approximations of integrated information and in no way capture the quality of experience—a central focus of IIT. Nevertheless, PCI is already being used to more precisely diagnose patients with disorders of consciousness (albeit experimentally). And once developed further, such measures might be useful in providing evidence for the presence of consciousness in newborns, non-human animals, “brain organoids,” or even artificial systems.

For more, see Empirical Validation of IIT.

Inferences about consciousness beyond humans

The postulates of IIT guide us first in accounting for the neural substrate of consciousness in humans. But there is no reason to stop there. Since the postulates are based exclusively on cause–effect power (0th postulate), it is reasonable to infer that consciousness is present in any substrate organized in line with the postulates.

The postulates suggest that consciousness may be associated with the nervous systems of many animals, big and small; that it may feel like something to be a plant, with no neurons at all; and that robots could be built not only with intelligence but also with consciousness—but only if they are wired correctly [2]. If we fully dive through the postulate looking-glass, we might even find that subatomic particles have a minute amount of consciousness (but only if they don't form part of a larger conscious system, as per exclusion).

Before leaping to conclusions like “everything is conscious,” it is important to note that the predictions of IIT are sometimes counterintuitive yet highly principled and specific. According to IIT, only physical systems that fulfill all the postulates will be conscious. This means that some systems that appear unconscious may in fact be conscious, while other systems that we intuitively assume to be conscious might not be. To find out which systems pass the bar of IIT’s predictions, we need to run them through the gauntlet of the postulates. 

Principled approaches to perennial problems in philosophy

The postulates of IIT allow us to analyze systems not only from the extrinsic perspective of an observer but also from the intrinsic perspective of the system itself. This change in perspective allows IIT to offer a novel approach to a number of areas of inquiry, including existence, causal composition, emergence, individuality, intrinsicality, causality, agency, free will, meaning, knowledge, and consciousness itself. The postulates of IIT are what allow the theory to weigh in on these perennial debates using a rigorous and quantitative framework.

To give one example, IIT’s defense of free will is summarized by the title of the paper “Only what exists can cause.” Arguments against free will are usually based on certain ontological assumptions—that our neurons are what “truly exist,” while our experiences are merely functional states that “come along for the ride.” In IIT, the postulates are the necessary and sufficient conditions for physical existence; they are thus our guide to determining what truly exists and what can truly cause.

For more, see FAQs: IIT & Philosophy.

Footnotes

[1] Casarotto et al. (2016). Stratification of unresponsive patients by an independently validated index of brain complexity. Annals of Neurology, 80(5), 718–729.
[2] Findlay G, Marshall W, Albantakis L, Mayner WGP, Koch C, Tononi G. Dissociating Intelligence from Consciousness in Artificial Systems – Implications of Integrated Information Theory. In: Proceedings of the 2019 Towards Conscious AI Systems Symposium, AAAI SSS19; 2019 and forthcoming.

Cite this FAQ

Juel, Bjørn Erik, Jeremiah Hendren, Matteo Grasso, and Giulio Tononi. "FAQ: Why are the postulates so important in IIT?" IIT Wiki. Center for Sleep and Consciousness UW–Madison. Updated June 30, 2024. http://www.iit.wiki/faqs/postulates.

What is the meaning of cause–effect power in light of other notions of “causal power”?

In everyday terms, when we talk about “power,” we often refer to what something is doing: the river is so strong, so “powerful,” that it is breaking the dam. This sense indicates a property the river has right now. Another sense of “power” takes into account what things could do in different circumstances: the river has the power to break the dam even when the current is weak because it can get strong with the next heavy rain. In this second, everyday sense, power has a counterfactual nature: something has the capacity to do X even when it is not actually doing it. 

This second sense of power has been refined in philosophical debates about causation. Since Aristotle, philosophers have categorized properties such as fragility, solubility, malleability, etc. as “causal powers” or “dispositions,” and talked about them in terms of potentiality (X has the power to Y when condition Z applies) and actuality (now that condition Z applies, the power to Y is actualized). For example, a plant has the power to grow when light and water are present (potentiality), and it has this power even when no light or water are present. Moreover, this power manifests if light and water (together with all other necessary factors) enable the plant to photosynthesize and thus grow (actuality). 

Philosophers have also distinguished “active power” from “passive power.” If we tried to disentangle them in the growth example, we might say that the plant has the active power to create sugar through photosynthesis, and it has the passive power to be killed by drought.

The notion of cause–effect power in IIT has a partial connection to the philosophical notion of causal power above, but note the following specifics of IIT’s notion:

1. IIT aims to be mathematically precise by characterizing causal powers using conditional probabilities in terms of TPMs.
2. IIT employs a unified mathematical framework to account for both “passive” and “active” powers—in IIT, cause power and effect power.
3. IIT is especially interested in systems that have causal power upon themselves—that is, intrinsic cause–effect power.
4. In IIT, actualized powers are not “what happens” but rather “what exists”—a cause–effect structure of causal distinctions and relations (a Φ-structure).

Each of these points is elaborated below.


1. IIT aims to be mathematically precise by characterizing causal powers using conditional probabilities in terms of TPMs.

IIT assumes that to discover causal interactions—and not mere correlations—a perturbational approach is necessary. This means that we repeatedly set the units of a system in all possible states and observe the probabilities of the system’s possible output states—each input–output transition constituting an “occurrence” across two time steps. We then compile the results as a transition probability matrix (TPM) (see FAQ: How do we get a TPM?). This approach starts from the assumptions of realism, physicalism, and atomism (see Three Methodological Assumptions), and works well for well-defined, discrete systems.
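As a minimal illustration of this perturbational recipe, consider the following sketch for a toy two-unit system. The update rules (A′ = OR(A, B), B′ = AND(A, B)) are our own assumptions for the example: we set the system in every possible state and compile where it goes into a state-by-state TPM.

```python
import numpy as np
from itertools import product

def update(state):
    """Assumed update rules: A' = OR(A, B), B' = AND(A, B)."""
    a, b = state
    return (a | b, a & b)

states = list(product((0, 1), repeat=2))   # (A, B): 00, 01, 10, 11
tpm = np.zeros((len(states), len(states)))

for i, s in enumerate(states):             # perturb: "set" the system in s
    j = states.index(update(s))            # observe the resulting state
    tpm[i, j] = 1.0                        # deterministic: probability 1

print(tpm)  # tpm[i, j] = P(next = j | current = i)
```

For a noisy system, each row would instead be estimated from repeated perturbations into the same state.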

Recall that our main interest in IIT is to capture the cause–effect power of the substrate of consciousness (i.e., a brain). Brains have clear constituents that are natural candidates as units (e.g., neurons or minicolumns); these also have definable states (e.g., firing or not firing) and can, in principle, be manipulated by setting them in all possible states (e.g., we could patch-clamp any number of neurons in the brain to whatever state we please). In practice, the exhaustive perturbational approach is not tenable in the brain; however, these parameters can be characterized rigorously and, most importantly, modeled based on simple systems of discrete, probabilistic logic functions (e.g., logic gates). 

IIT’s approach using perturbation and TPMs would be quite difficult for the paradigmatic cases of causal powers discussed in philosophy. For instance, a plant’s “power to grow” involves many time steps and big changes in the constitution of the plant itself (e.g., from a seed to a full tree). Such general notions of powers are hard to capture mathematically because the variables are too macro, and states and manipulations cannot be defined rigorously: what is the “state” of a plant, and how should we manipulate or model that state? Likewise, to obtain the relevant conditional probabilities, how would we “reset” the state of the tree once it has grown, or the state of the river once it has broken the dam?

Other possible contexts in which IIT’s approach can work are idealized quantum systems [1] or circuits of logic gates. In these scenarios, we can precisely define the variables, states, and manipulations, and—based on these choices—obtain a complete description of the powers at play (which, in IIT, is then the basis for the search for consciousness in these systems).

2. IIT employs a unified mathematical framework to account for both “passive” and “active” powers—in IIT, cause power and effect power.

The same methodology can be used to describe passive and active power, which in IIT are respectively termed cause power and effect power. We are checking for effect power if we ask, say, “Does A being ON right now change the probability of B being ON in the next time step?” And we are checking for cause power if we ask, say, “How does A being ON in the previous time step change the probability of B being ON right now?” These powers may be symmetric, but in IIT’s analyses, one direction is often more selective because of either overdetermination or degeneracy (For a more technical explanation, see Computing Φ: Step 3 Compute intrinsic information). 
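Operationally, the two directions can be read off a TPM, as in this rough sketch (the toy TPM values are ours): effect power corresponds to a row of the matrix, while cause power requires Bayes’ rule over a uniform distribution of perturbed past states.

```python
import numpy as np

# Assumed toy TPM over states of (A, B), ordered 00, 01, 10, 11:
# tpm[i, j] = P(next = j | current = i)
tpm = np.array([
    [0.9, 0.1, 0.0, 0.0],
    [0.0, 0.1, 0.0, 0.9],
    [0.1, 0.0, 0.8, 0.1],
    [0.0, 0.0, 0.1, 0.9],
])

# Effect power: does A being ON now change P(B = ON next)?
# Next states with B' = ON are 01 and 11 (columns 1 and 3).
p_b_next_on = tpm[2, [1, 3]].sum()        # given current state 10 (A ON)

# Cause power: how does the current state constrain the previous one?
prior = np.full(4, 0.25)                  # uniform perturbation prior
joint = prior[:, None] * tpm              # P(prev, current)
p_prev_given_11 = joint[:, 3] / joint[:, 3].sum()  # P(prev | current = 11)

print(p_b_next_on, p_prev_given_11)
```

The asymmetry mentioned above shows up here: when several past states converge on the same current state (degeneracy), the cause distribution is less selective than the effect distribution, or vice versa.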

IIT analyzes cause power and effect power in separate operational steps. However, insofar as these powers together characterize what a system is in physical terms, they are treated as the unified concept of cause–effect power.  

3. IIT is especially interested in systems that have causal power upon themselves—that is, intrinsic cause–effect power.

IIT is especially focused on accounting for consciousness in physical terms. Since consciousness is intrinsic, our analysis of cause–effect power also strives to be intrinsic—concerning the causal powers a system has upon itself.

This notion is not wholly foreign to philosophy and biology, which recognize that some systems are unique in affecting themselves: they show self-regulatory properties or self-organization (usually in the service of homeostasis), which has long been considered a hallmark of living organisms. Similarly, though the brain is connected to the environment through its inputs, its future states depend heavily on its own previous states. For instance, when we dream, we are almost completely disconnected from the environment, and most of the experience can be accounted for by mechanisms in the brain affecting one another, independently of external stimuli.

To capture intrinsic cause–effect power, IIT applies the same operational methodology described above, but focusing on powers within the system. This results in a “square” TPM (such as the example here of AB over AB), in which both rows and columns include the same units, capturing whether and how each takes/makes a difference from/to any other. 

Importantly, this is a way to operationally characterize the intrinsic cause–effect power of the system—power that the system has over itself. This intrinsic TPM is the basis of the entire “operational toolbox” of IIT, through which we check that the cause–effect power of a system is intrinsic, specific, integrated, definite, and structured (following the postulates). 

4. In IIT, actualized powers are not “what happens” but rather “what exists”—a cause–effect structure of causal distinctions and relations (a Φ-structure). 

As described above, to figure out what truly exists (the actual), we have to probe what could exist (the potential, as various counterfactuals). This counterfactual reasoning is captured in the TPMs and repertoires that we use in IIT’s operational toolbox. However, experience is not potential but actual—in the sense of being “right here, right now.” Hence, to account for experience in physical terms, the cause–effect power of the substrate of consciousness must also be actual. It’s not enough to describe a repertoire of counterfactual causes and effects; we must rather choose a specific cause state and effect state to characterize the substrate in its actual state (see information postulate).

Hence, in IIT, we say that “the actual is the potential” to express that what exists is the actualized powers of a substrate—the cause–effect structure it specifies. What this structure is (actually) is specified by a single (potential) cause state and a single (potential) effect state. These are potential not only because they are one among many counterfactuals but also because they may or may not “really” occur in the next time step (or may not “really” have occurred in the previous one) [2].

This notion of actualized cause–effect power marks a key difference from the general notion of “actual” or “manifested” causal powers in philosophy, which describe “what happens” (e.g., the river is breaking the dam). In IIT, instead, actualized cause–effect powers are not “what happens” or “what something is doing,” but rather what exists.

Footnotes

[1] Albantakis et al. (2023). Measuring the integrated information of a quantum mechanism. arXiv preprint arXiv:2301.02244.
[2] As described in the information postulate, these cause states and effect states are those that maximize the intrinsic information of the substrate in its current state.

Cite this FAQ

Grasso, Matteo, Jeremiah Hendren, Bjørn Erik Juel, and Giulio Tononi. "FAQ: What is the meaning of cause–effect power in light of other notions of 'causal power'?" IIT Wiki. Center for Sleep and Consciousness UW–Madison. Updated June 30, 2024. http://www.iit.wiki/faqs/postulates.

Does the intrinsicality postulate mean that the substrate of consciousness is not influenced by its environment?

Following the intrinsicality postulate, intrinsic cause–effect power is what matters to account for consciousness. In other words, we must look at how the substrate of consciousness causally constrains itself to explain why a given experience feels the way it does. This does not mean, however, that the environment has no influence on what it is like to be that substrate. On the contrary, the influence is great, but we must be precise about how this influence plays out within the substrate.

The environment influences the substrate of consciousness—and its intrinsic cause–effect powers—in at least three major ways. First, through evolution, development, and learning, causal processes in the environment contribute to shaping the connectivity among the units of the substrate (see matching). Second, certain aspects of our environment—those that we have evolved to detect—will generally act as strong triggers for subsequent states of the substrate of consciousness. And, third, the current state of background conditions will place constraints on the causal powers the substrate of consciousness can have over itself.

This third point may seem strange: how can the state of something outside the system have an impact on the intrinsic cause–effect power of the system? This confusion stems from our extrinsic view of the system. When analyzing the system from the outside, we know that the state of background units matters. However, when we apply the intrinsicality postulate, we try to take the intrinsic perspective of the system and ask what the system “knows” about itself (figuratively speaking). 

Consider a very simple system of three units I, B, and O. Let’s say we have reason to believe B alone is a complex, so we consider it as our first candidate (dashed blue border). This means that units I and O are (at most) background conditions. Now we can repeat the question more precisely: How do I and O matter for the intrinsic cause–effect power specified by B, if at all? O has no influence on B, but unit I provides an input to B, so it may influence the way B affects itself. Let’s see how that could work by inspecting the (effect) TPM of system B in two different contexts. 

If unit I is inactive (“pinned” OFF), B’s TPM looks like this (left): it is 80% likely that B will stay in its current state, and 20% likely its state will flip. But if unit I is constantly ON (“pinned” ON, right TPM), this affects the probabilities—B will almost certainly be ON, regardless of its current state. Since the TPM is the basis for assessing the intrinsic causal powers of a system, it seems like the state of unit I really does matter for the power B has over itself. 
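The following toy function sketches this dependence. The 0.8/0.2 values follow the example above; 0.95 is our stand-in for “almost certainly ON,” since the exact figure is not given.

```python
def p_b_next_on(b_current: int, i_pinned: int) -> float:
    """P(B = ON at next step), conditioned on background unit I."""
    if i_pinned == 0:
        # I pinned OFF: B keeps its current state with probability 0.8
        return 0.2 if b_current == 0 else 0.8
    # I pinned ON: strong input drives B ON regardless of current state
    return 0.95  # assumed value for "almost certainly ON"

for i_state in (0, 1):
    print([p_b_next_on(b, i_state) for b in (0, 1)])
# -> [0.2, 0.8]   (I OFF: B tends to keep its state)
# -> [0.95, 0.95] (I ON: B is driven ON)
```

Pinning I thus changes B’s intrinsic TPM, and with it the cause–effect power B has over itself.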

The same logic applies to the larger network ABCDIO used throughout the wiki. Units I and O clearly interact with system ABCD, but what we are interested in capturing is the way they influence the cause–effect power ABCD has upon itself [1].

In sum, in IIT the environment in which a system is embedded is highly relevant. While the system’s intrinsic causal powers are what matter in accounting for what it is like to be that system, these intrinsic powers depend greatly on the environment—in the present, in the immediate past, and over the course of learning and evolution.

Footnotes

[1] Note that background conditions can also affect what units are included in the complex altogether. In other words, if strong enough, background conditions can “steal” units out of the complex. For example, if B were to behave like a deterministic OR gate and unit I were always ON, B would turn ON with probability 1, and it thus could not “take any difference” from other units within the candidate system. This would mean that B could not form part of the complex in question (more precisely, the candidate complex including B would be reducible). In this case, B itself would become a background condition for the complex (presumably ACD). But if unit I then turned OFF, and no longer determined B’s state, B might “rejoin” the complex.

Cite this FAQ

Juel, Bjørn Erik, Jeremiah Hendren, Matteo Grasso, and Giulio Tononi. "FAQ: Does the intrinsicality postulate mean that the substrate of consciousness is not influenced by its environment?" IIT Wiki. Center for Sleep and Consciousness UW–Madison. Updated June 30, 2024. http://www.iit.wiki/faqs/postulates.

How is information in IIT different from "Shannon information"?

This FAQ is under development. For the time being, here is the abstract of Zaeemzadeh & Tononi (forthcoming):

"Information theory, introduced by Shannon, has been extremely successful and influential as a mathematical theory of communication. Shannon’s notion of information does not consider the meaning of the messages being communicated but only their probability. Even so, computational approaches regularly appeal to “information processing” to study how meaning is encoded and decoded in natural and artificial systems. Here, we contrast Shannon information theory with integrated information theory (IIT), which was developed to account for the presence and properties of consciousness. IIT considers meaning as integrated information and characterizes it as a structure, rather than as a message or code. In principle, IIT’s axioms and postulates allow one to “unfold” a cause–effect structure from a substrate in a state—a structure that fully defines the intrinsic meaning of an experience and its contents. It follows that the communication of information as meaning requires similarity between cause–effect structures of sender and receiver."

There have been many proposals for calculating Φ. Why should we consider one to be the “right” measure?

The first formal measure of integrated information (Φ) in the context of IIT was published in 2004, building on the “neural complexity” measure of the dynamic core hypothesis (Tononi & Edelman 1998). Since then, the measure of Φ (and φ) has evolved substantially in the Tononi lab, and many alternative measures have been proposed outside of the lab, along with approximations to crudely estimate Φ in real brains. Some have criticized these developments, claiming that the updates in the “official measure” and the proliferation of approximations mean that Φ is arbitrary, underconstrained, or untestable. This view is mistaken for a variety of reasons.

First, the theory is a work in progress, and one of the main avenues of research over the past two decades has been to find the “right” measure of Φ. The “right” measure should be thought of as the one that best formalizes the essential properties of the substrate of consciousness (the postulates), which aim to express the essential properties of experience (the axioms) in physical terms. 

The developments leading to the present mathematical formalism (IIT 4.0, Albantakis et al. 2023) should be viewed as progressive attempts to achieve this. Only in IIT 4.0 are all axiom–postulate pairs presented in their mature form, which is operationalized in the Φ (and φ) measures. Most importantly, IIT 4.0 includes the intrinsic information measure—the only information measure that is causal, intrinsic, and specific (following the existence, intrinsicality, and information postulates; Barbosa et al. 2020). As a consequence, the notion of maximal integrated (intrinsic) information—at the level of both systems and mechanisms (φs and φd)—is now also more rigorously formalized (Marshall et al. 2023). Finally, IIT 4.0 also offers an explicit account of causal relations, which allows the Φ measure to operationalize the composition postulate as well.

What about the many alternative ways to calculate Φ? (See, e.g., Mediano et al. 2019 for a comparison of five measures.) Until now, all alternative measures have focused only on capturing the co-presence of integration and information (largely understood as “functional segregation”). In layman’s terms, they have tried to measure the degree to which the brain paradoxically acts as a whole while its parts simultaneously behave independently. In this sense, some might argue that they are alternatives to the Φ measures of IIT 1.0 (Tononi 2004) or 2.0 (Tononi 2008; Balduzzi & Tononi 2008), which also focused on the integration and information postulates (and drew on standard, extrinsic information measures). However, the alternative measures are certainly not comparable to the measures in IIT 3.0 (Oizumi et al. 2014) and 4.0 (Albantakis et al. 2023), because they attempt to account for only two of the postulates.

Most notably, alternative measures largely treat information in a statistical and extrinsic, observer-oriented sense (like “Shannon information” measures). A proper alternative would have to account for information in a causal and intrinsic sense (not to mention in a structured and definite sense) [1]. While we applaud these initiatives and hope that more will follow, the only true “alternatives” to Φ will be those that capture all axiom–postulate pairs. (Also see FAQ: How is information in IIT different from “Shannon information”?)

What about approximations of Φ in the brain? Computing Φ is notoriously difficult, even for tiny “toy” systems. Therefore, approximations and heuristics have to be applied to test and apply the principles of IIT in real systems, beginning with the human brain. Like the alternative Φ measures, the various Φ approximations have all aimed to measure the co-presence of integration and information in neural activity (see Sarasso et al. 2021 for a historical overview). Again here, the idea of information has been approximated through proxy notions of differentiation or functional segregation in brain activity. 

These approximations have certainly served a purpose—both in confirming that the core intuitions of IIT are on the right track and in developing clinical applications. Most notably, the perturbational complexity index (Massimini et al. 2009; Casali et al. 2013) was inspired by IIT and is evolving into a reliable bedside metric of the presence vs. absence of consciousness. 

However, like all proposed Φ alternatives, Φ approximations are greatly underconstrained in that they capture only two of the six axiom–postulate pairs, and by proxy at that. Approximations indeed serve a purpose—and should be encouraged—but they, too, should not be mistaken as alternatives to a proper measure of Φ. 

Footnotes and sources

[1] Recall that to capture causal information, a perturbational approach is necessary. Many alternative Φ measures rely instead on empirical or observational probability distributions, which makes them correlational rather than causal.

Sources:
Albantakis, L., Barbosa, L., Findlay, G., Grasso, M., Haun, A. M., Marshall, W., ... & Tononi, G. (2023). Integrated information theory (IIT) 4.0: Formulating the properties of phenomenal existence in physical terms. PLoS Computational Biology, 19(10), e1011465.
Balduzzi, D., & Tononi, G. (2008). Integrated information in discrete dynamical systems: Motivation and theoretical framework. PLoS Computational Biology, 4(6), e1000091.
Barbosa, L. S., Marshall, W., Streipert, S., Albantakis, L., & Tononi, G. (2020). A measure for intrinsic information. Scientific Reports, 10(1), 18803.
Casali, A. G., Gosseries, O., Rosanova, M., Boly, M., Sarasso, S., Casali, K. R., ... & Massimini, M. (2013). A theoretically based index of consciousness independent of sensory processing and behavior. Science Translational Medicine, 5(198), 198ra105.
Marshall, W., Grasso, M., Mayner, W. G., Zaeemzadeh, A., Barbosa, L. S., Chastain, E., ... & Tononi, G. (2023). System integrated information. Entropy, 25(2), 334.
Massimini, M., Boly, M., Casali, A., Rosanova, M., & Tononi, G. (2009). A perturbational approach for evaluating the brain's capacity for consciousness. Progress in Brain Research, 177, 201–214.
Mediano, P. A., Seth, A. K., & Barrett, A. B. (2019). Measuring integrated information: Comparison of candidate measures in theory and simulation. Entropy, 21(1), 17.
Oizumi, M., Albantakis, L., & Tononi, G. (2014). From the phenomenology to the mechanisms of consciousness: Integrated information theory 3.0. PLoS Computational Biology, 10(5), e1003588.
Sarasso, S., Casali, A. G., Casarotto, S., Rosanova, M., Sinigaglia, C., & Massimini, M. (2021). Consciousness and complexity: A consilience of evidence. Neuroscience of Consciousness, 7(2), 1–24.
Tononi, G. (2004). An information integration theory of consciousness. BMC Neuroscience, 5, 42.
Tononi, G. (2008). Consciousness as integrated information: A provisional manifesto. The Biological Bulletin, 215(3), 216–242.
Tononi, G., & Edelman, G. M. (1998). Consciousness and complexity. Science, 282(5395), 1846–1851.

Cite this FAQ

Juel, Bjørn Erik, Jeremiah Hendren, Matteo Grasso, and Giulio Tononi. "FAQ: There have been many proposals for calculating Φ. Why should we consider one to be the 'right' measure?" IIT Wiki. Center for Sleep and Consciousness UW–Madison. Updated June 30, 2024. http://www.iit.wiki/faqs/postulates.

Why do we assess the causal power of all orders of mechanisms? Why not simply assess the causal power of the individual units alone? 

The composition postulate requires that we assess the cause–effect power of all orders of mechanisms. In other words, we don’t just assess the cause–effect power of units A and B (first-order mechanisms), but also of AB together (second-order mechanism). There are two reasons for this: one concerns phenomenology and the other the notion of physical irreducibility.

As regards phenomenology, when we introspect the content of our experience, we find it is structured in a particular way. For example, when we see a face, we see it as having components: there are two eyes, one nose, one mouth, and so on. These components are of different orders: the face as a whole occupies a bigger portion of our visual field than any of its components, and each component is included in it. This is evidence that our experience is composed of distinctions of different orders (e.g., big and small) related in various ways (e.g., some include others). 

In physical terms, therefore, IIT conjectures that higher-order phenomenal distinctions (e.g., “face”) should correspond to higher-order causal distinctions and the mechanisms that specify them. This seems odd to many because they assume that if we know the causes (or effects) of the individual units of a system (e.g., A, B, C, and D), we can predict the dynamics of the entire system (ABCD). This view can be called “causal reductionism”: if one knows what caused, say, A and B to fire separately, there is no need to consider the cause of AB firing together—in fact, factoring in AB may “overdetermine” causation in the system.

This view, however, conflates causation with prediction, which can be dissociated. Here’s a simple intuition pump to see why: imagine two parallel rows of falling dominoes that share a common initial “detonator” domino. The falling of a domino in one chain can be predicted by the falling of a domino in the other chain, though it is clearly not caused by it (for a complete explanation, consult Grasso et al. 2021).

When we unfold the causal powers of a system in a state, we must assess the causal powers of each candidate distinction to discover which ones “exist”—that is, which ones have causal power that cannot be reduced. We check this by seeing whether there is a way to partition the mechanism such that it makes no difference to the intrinsic information it specifies about its cause and effect. (To fully understand this, and the example here, it may be helpful to first become familiar with the Information and Integration postulates.) 

Consider the example in the figure here. If we consider units C and D, there are three possible mechanisms (C, D, and CD). The first-order mechanisms C and D are irreducible because partitioning either one loses information about its cause. That is, given that C (left) is an XNOR gate, it fires only if A and B are in the same state (both ON or both OFF—black bars, 50% probability each). Thus by partitioning C, we lose information about the state of A and B, which becomes mere chance (orange bars, 25% probability for each of the four states). (This partition is called a “disintegration cut” because it severs the mechanism from its purview completely.) The same holds for mechanism D (middle): by partitioning the mechanism, we go from specifying with probability 1 that B was ON (black bar) to mere chance (orange bars).

This analysis is roughly in line with causal reductionism so far. But what many miss is that the same operational criteria hold for mechanism CD (right). Only the second-order mechanism CD is able to establish that A and B were both ON (black bar, probability 100%)—something that neither C alone nor D alone can specify, as we’ve already seen. Moreover, CD specifies this information irreducibly: if we partition mechanism CD, we lose information about the state of AB (black bar to orange bars).
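Here is a rough sketch of the repertoire logic just described, assuming C = XNOR(A, B) and D = COPY(B) (the COPY rule is our inference from “D specifies with probability 1 that B was ON”): given that a mechanism fires, we list the past states of AB compatible with that fact and compare against the uniform repertoire produced by the disintegration cut.

```python
import numpy as np
from itertools import product

past = list(product((0, 1), repeat=2))   # past states of (A, B): 00, 01, 10, 11

def cause_repertoire(fires):
    """Normalized distribution over past states compatible with firing."""
    w = np.array([fires(a, b) for a, b in past], dtype=float)
    return w / w.sum()

c = lambda a, b: int(a == b)             # XNOR: fires iff A and B match
d = lambda a, b: int(b)                  # COPY (assumed): fires iff B was ON
cd = lambda a, b: c(a, b) * d(a, b)      # second-order mechanism: both fire

print(cause_repertoire(c))    # [0.5 0.  0.  0.5] -> A and B in the same state
print(cause_repertoire(d))    # [0.  0.5 0.  0.5] -> B was ON
print(cause_repertoire(cd))   # [0.  0.  0.  1. ] -> only CD pins AB = 11
print(np.full(4, 0.25))       # disintegration cut: back to mere chance
```

Neither first-order repertoire pins down AB = 11 on its own; the second-order mechanism CD does, and partitioning CD collapses that specification back toward chance.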

In short, causal reductionism is incoherent because the criterion to assess irreducibility of first-order mechanisms can be applied to higher-order ones too, and there is no reason not to do so. Just as first-order mechanisms are irreducible because the “disintegration cut” makes a difference to the causes and effects they specify, higher-order mechanisms can be irreducible if every way of partitioning them makes a difference to the causes and effects they specify. 

When we assess the causal powers of a system, we must capture them in full, without leaving anything out. This means that when we apply the postulates of intrinsicality, information, integration, and exclusion, we must do so in an exhaustive, compositional way. Nothing limits us to considering only first-order mechanisms (e.g., A or B). As we have seen, higher-order mechanisms (e.g., AB) may also satisfy intrinsicality (if they have a cause and an effect within the system), information (if they have a specific cause and effect), integration (if their cause and effect are irreducible to the cause and effect specified by their constituents), and exclusion (if their cause and effect are maximally irreducible). 

To see this argument in an academic paper, see Grasso, M., Albantakis, L., Lang, J. P., & Tononi, G. (2021). Causal reductionism and causal structures. Nature Neuroscience, 24(10), 1348-1355. 

Cite this FAQ

Grasso, Matteo, Bjørn Erik Juel, Jeremiah Hendren, and Giulio Tononi. "FAQ: Why do we assess the causal power of all orders of mechanisms? Why not simply assess the causal power of the individual units alone?" IIT Wiki. Center for Sleep and Consciousness UW–Madison. Updated June 30, 2024. http://www.iit.wiki/faqs/postulates.