Integrated information theory (IIT) is not just a theory of consciousness but also an intrinsic ontology. This framework differs significantly from the prevailing approaches within psychology and neuroscience—and in fact within science at large—because it fully incorporates human experience into its premises and methodology.
This part of the IIT Wiki is under construction. For the time being, below is a list of headings that indicate future content, with a few links to relevant published works.
Please see Findlay et al. (2024):
"Developments in machine learning and computing power suggest that artificial general intelligence is within reach. This raises the question of artificial consciousness: if a computer were to be functionally equivalent to a human, having the same cognitive abilities, would it experience sights, sounds, and thoughts, as we do when we are conscious? Answering this question in a principled manner can only be done on the basis of a theory of consciousness that is grounded in phenomenology and that states the necessary and sufficient conditions for any system, evolved or engineered, to support subjective experience. Here we employ Integrated Information Theory (IIT), which provides principled tools to determine whether a system is conscious, to what degree, and the content of its experience. We consider pairs of systems constituted of simple Boolean units, one of which---a basic stored-program computer---simulates the other with full functional equivalence. By applying the principles of IIT, we demonstrate that (i) two systems can be functionally equivalent without being phenomenally equivalent, and (ii) that this conclusion is not dependent on the simulated system's function. We further demonstrate that, according to IIT, it is possible for a digital computer to simulate our behavior, possibly even by simulating the neurons in our brain, without replicating our experience. This contrasts sharply with computational functionalism, the thesis that performing computations of the right kind is necessary and sufficient for consciousness."
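The abstract's central distinction can be illustrated with a toy example (ours, not the paper's): two systems can agree on every input-output mapping while differing in the internal mechanisms that produce it. The sketch below, with purely hypothetical names, contrasts a small gate-level Boolean system with a lookup-table simulator of it.

```python
# Toy illustration (not from Findlay et al.): two systems with identical
# input-output behavior but different internal causal structure.

def system_a(x, y):
    # "Intrinsic" system: XOR computed by four interacting NAND gates.
    n1 = not (x and y)
    n2 = not (x and n1)
    n3 = not (y and n1)
    return not (n2 and n3)

# Simulator: reproduces system_a's behavior from a stored table --
# a very different internal organization.
TABLE = {(x, y): system_a(x, y) for x in (False, True) for y in (False, True)}

def system_b(x, y):
    return TABLE[(x, y)]

# Functional equivalence: identical outputs for every input...
assert all(system_a(x, y) == system_b(x, y)
           for x in (False, True) for y in (False, True))
# ...yet the mechanisms producing those outputs differ, which is the
# distinction IIT's causal analysis turns on.
```

The point of the sketch is only that sameness of function does not fix sameness of mechanism; IIT's claim is that experience depends on the latter.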
Findlay, G., Marshall, W., Albantakis, L., David, I., Mayner, W.G.P., Koch, C., & Tononi, G. (2024). arXiv preprint arXiv:2412.04571.

Please see Mayner, Juel, and Tononi (forthcoming):
"Here, we extend the integrated information theory of consciousness to assess how intrinsic meanings are triggered by extrinsic stimuli. Using simple simulated systems, we show that perception is a structured interpretation, triggered by a stimulus but provided by a system’s intrinsic connectivity. We then show that the “matching” between a system and an environment can be measured by assessing the diversity of intrinsic meanings triggered by typical sequences of stimuli. This approach offers a way of understanding how the meaning of an experience, which is necessarily intrinsic to the subject, can refer to extrinsic entities or processes."
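As a loose illustration of the idea of measuring diversity of triggered states (a hypothetical toy measure, not the paper's method), one can drive a minimal Boolean system with a stimulus sequence and compute the Shannon entropy of the internal states it visits:

```python
# Toy sketch (hypothetical dynamics and measure): approximate "matching"
# as the diversity (Shannon entropy) of internal states a simple system
# visits when driven by a stimulus sequence.
from collections import Counter
from math import log2

def step(state, stimulus):
    # Minimal two-unit Boolean system; purely illustrative update rule.
    a, b = state
    return (b ^ stimulus, a and stimulus)

def state_diversity(stimuli, state=(0, 0)):
    visited = []
    for s in stimuli:
        state = step(state, s)
        visited.append(state)
    counts = Counter(visited)
    n = len(visited)
    return -sum(c / n * log2(c / n) for c in counts.values())

# A structured stimulus stream triggers a richer repertoire of internal
# states than a constant one:
varied = state_diversity([0, 1, 1, 0, 1, 0, 0, 1])
constant = state_diversity([0] * 8)
assert varied > constant
```

The actual IIT measure concerns intrinsic meanings (cause-effect structures), not raw states; the sketch only conveys why diversity of triggered responses can serve as a proxy for how well a system's structure matches its environment.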
Mayner, W., Juel, B., & Tononi, G. (forthcoming). Meaning and matching: Quantifying how the structure of experience matches the environment.

Please see Tononi et al. (2022), "Only what exists can cause: An intrinsic view of free will":
"If IIT is right, we do have free will in the fundamental sense: we have true alternatives, we make true decisions, and we—not our neurons or atoms—are the true cause of our willed actions and bear true responsibility for them. IIT's argument for true free will hinges on the proper understanding of consciousness as true existence, as captured by its intrinsic powers ontology: what truly exists, in physical terms, are intrinsic entities, and only what truly exists can cause."
Tononi, G., Albantakis, L., Boly, M., Cirelli, C., & Koch, C. (2022). Only what exists can cause: An intrinsic view of free will. arXiv preprint arXiv:2206.02069.