Is Consciousness Discrete?

A Continuum of Consciousness


Most current theories of consciousness assume that consciousness is a discrete entity: that there is a point at which a physical system becomes conscious (e.g., when it reaches a sufficient level of sophistication), that there are thresholds of brain activity at which this activity can ‘enter’ consciousness, and that there are neurons that do or do not ‘participate in’ consciousness. This article proposes an alternative conceptualisation of consciousness, treating it not as a discrete entity but as a continuous one. A continuum of consciousness is proposed, whereby physical systems are more or less conscious and can thereby be placed at points on this continuum. It follows that there is no discrete point or threshold (e.g., of biological sophistication) at which an entity ‘becomes’ conscious and, likewise, no threshold of brain activation that corresponds with ‘entry’ into consciousness.


Consciousness has become a popular topic in scientific and philosophical discourse.

However, the concept of consciousness is notoriously difficult to ‘pin down’ or define in a meaningful way. Attempts to define consciousness often proceed by enumerating its properties: those features that characterise what it is like to be conscious. A typical list of these properties is as follows (Chalmers, 1996):

- Visual Experiences
- Auditory Experiences
- Tactile Experiences
- Olfactory Experiences
- Taste Experiences
- Experiences of Hot and Cold
- Pain
- Other Bodily Sensations
- Mental Imagery
- Conscious Thought
- Emotions
- The Sense of Self

The above list could be further condensed by grouping ‘Visual Experiences’, ‘Auditory Experiences’, ‘Tactile Experiences’, ‘Olfactory Experiences’ and ‘Taste Experiences’ into a group called ‘Sensory Experiences’. The defining conditions of consciousness therefore appear to include ‘experience’. Chalmers (1996) suggests that the ‘problem of experience’ is the really hard problem of consciousness, as contrasted with relatively easy problems such as explaining information-processing functions.

What, then, does it mean to experience? Nagel (1974) suggests that experience refers to a

property of a system where there is something it is like to be such a system. It is the

subjective aspect of that system. This can be contrasted with the objective aspects of an

information processing system, such as its ability to detect edges in the visual input. As

Chalmers (1996) puts it, ‘There is no issue as to whether these aspects can be explained

scientifically’ (referring to the objective aspects). The issue of a potential scientific

explanation arises, however, when dealing with the subjective aspect of consciousness – the

conscious experience.

The difference between the explanations of the functions of objective aspects and

explanations dealing with the experience of these functions is referred to as the explanatory

gap by Levine (1983). Chalmers (1996) suggests that to explain the experience, we need an

approach that goes beyond the standard methods of cognitive neuroscience, which are

adequate for explaining the functions but appear inadequate for filling the gap. Chalmers

(1996) argues additionally for a non-reductive explanation, where reductionism is

concerned with explanation by breaking the thing being explained into constituent parts.

Chalmers suggests that we should take experience as a fundamental entity, and explain it by

its relationship to other phenomena/entities in the world. The relationship between

conscious experience and the rest of the world is captured in Nagel’s (1974) aforementioned characterisation of experience as there being something it is like to be that system.

The ‘something it is like’ property is referential, in that it requires two entities, one referring

to the other. Furthermore, it seems to require the entity doing the referring to be able to

imagine what it is like to be something else. The referential nature of this property may be a

barrier to it being an objective, empirical property of consciousness. However, as stated by

Chalmers (1996), any adequate theory of consciousness must specify what gives rise to

consciousness (i.e., its empirical pre-conditions) along with a description of what it is like to

be conscious and what it means to be conscious.

In his outline of what a theory of consciousness would need to do, Chalmers (1996) assumes

a discrete nature of consciousness by claiming that at a certain point of biological

sophistication, this biology gives rise to consciousness, which implies that at one point of

sophistication there is ‘no consciousness’ and at another point there is. We dispute this

assumption and deal with the concept of consciousness as a continuum, rather than a

discrete, dual entity.

The current article attempts to sketch a theory of consciousness that is inspired by problems

in existing theories which treat consciousness as a discrete entity. In contrast, the current

article proposes a continuum of consciousness, which suggests that different physical

systems can be more or less conscious than one another and that there is no discrete

threshold at which consciousness ‘emerges’ or a physical entity ‘becomes’ conscious.

More specifically, the theory proposed here suggests that organisms are each at one point

on a consciousness continuum at any one time, and that their placement on this continuum

will change throughout their lifespan, although there is continuity between conscious

states in time, such that successive conscious states will fall close to one another. In keeping

with a materialist/physicalist philosophy, the biological makeup of the organism is identified

as the factor determining placement on the consciousness continuum. However, as with other biological

systems, its functioning is best conceptualised as a variable that continuously changes

throughout time and can be measured on a continuum, not on a discrete ‘on’/‘off’ scale.

It is widely accepted that the biological/physical makeup of an organism influences whether

that organism can be considered conscious (i.e., when adopting the prevalent view that

consciousness is a discrete entity). As Chalmers (1996) states, ‘It is widely agreed that experience arises from a physical basis’. This quote illustrates the point that conscious

experience is widely considered to be, at least, related to the physical makeup of the system

under consideration. However, the idea that experience arises (i.e., at some point there is

no experience and at another, there is experience) is disputed in this article. Regardless, the

point being made here is simply that biological/physical makeup influences consciousness,

or, more precisely, helps determine placement on a continuum of consciousness.

It is not satisfactory, in formulating a new theory of consciousness, to say simply that the

biological makeup of an organism determines its placement on the continuum of

consciousness, as this raises the question: what aspects of its biological makeup contribute the most to its placement on the continuum? Here, we suggest that this placement is highly associated with the extent to which the organism integrates intention with action. This necessarily requires an organism that can intend and act; however, these two concepts may be best dealt with on a continuum, so that each organism can intend and act to different extents and each can integrate the two to different extents.

What is known as intention is a contentious issue in contemporary philosophy. Here,

intention is used to refer to aboutness (Dennett, 1983). That is, a state is intentional when it is about something else. In the context of cognition and consciousness, intention relates to thoughts that are about something, usually future actions or future thoughts. Integration of intention and action refers to the idea that intentions will

result in actions and that these actions will be closely related to the intentions, in that the

intentions are about those actions. Integration of Intention and Action obviously requires a

physical system that can both intend and act. If either or both of these properties are not

present, it is not possible for there to be integration between the two.

Integration between Intention and Action is not a new idea in the context of what it means

to be conscious. Chalmers (1996) says, ‘We sometimes say that an action is conscious

precisely when it is deliberate’. This quote makes clear the connection between intention

(deliberation) and action, in the context of consciousness. In the context of the current

theory, greater integrity between intention and action will place the physical system (e.g.,

human being) on a higher point on the consciousness continuum. So, effectively, those

systems that have greater integrity between intention and action are more conscious, at

least according to the current theory. Why should this be the case?

The reasoning behind the relationship between Integrity of Intention and Action and

consciousness relates to the connection between the physical system under consideration

and the rest of the physical world. In the case of a living organism, we can characterise, in a

simple manner, sensory inputs and motor outputs that represent connectivity with the rest

of the physical world. Furthermore, the motor outputs will generate more external input,

creating a feedback loop. Therefore, the motor actions of an organism will, to some extent,

shape its conscious experience. If these motor actions are correlated with what the

organism wants to do, its intention, then they will, to the extent they are correlated with the

intention, represent that intention to the outside world, and again to the organism itself.

This representation means that there is a connection between the ‘inside’ of the organism (its intention) and the ‘outside’ (its action) and also that this connection is reciprocal

(outside to inside and inside to outside). When there is integrity between intention and

action, there is integrity in this reciprocal connection between the organism and its world.

But what does this have to do with consciousness?
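The correlation between intentions and motor actions described above can be illustrated with a toy sketch. Everything in it is an assumption introduced for illustration only: the article proposes no formal metric, and the Pearson-correlation measure, the rescaling to a 0–1 score, and the function names are hypothetical.

```python
# Toy sketch (illustrative only): treating 'integrity of intention and
# action' as the correlation between an organism's intended and executed
# actions, rescaled to a hypothetical 0-1 continuum placement.

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def continuum_placement(intended, executed):
    """Map intention-action correlation in [-1, 1] onto a [0, 1] score."""
    return (pearson(intended, executed) + 1) / 2

# Actions that track intentions closely yield a high placement...
high = continuum_placement([1.0, 2.0, 3.0, 4.0], [1.1, 2.0, 2.9, 4.2])
# ...while uncorrelated actions yield the midpoint of the scale.
mid = continuum_placement([1.0, 2.0, 3.0, 4.0], [3.0, 1.0, 4.0, 2.0])
```

On this toy reading, the score summarises only the reciprocal intention–action coupling; it says nothing about the biological makeup factor, which the theory treats as a separate determinant of placement.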

The existence of this connection and its integrity means that the organism under consideration is conscious in that it is representing its world and responding to it in a meaningful way. This connection, it is proposed, along with its biological underpinnings, is what produces conscious states; organisms that are more connected (that have a greater integrity between intention and action) are therefore more conscious (i.e., higher on the consciousness continuum). It is the reciprocal observer–environment connection that drives the development of consciousness and the movement along the consciousness continuum.


In summary, this article has introduced the idea of a continuum of consciousness and has

argued that consciousness should be re-defined as a continuous entity, as opposed to a

discrete one (i.e., where there can be two states – conscious and not conscious). The

reasons for doing so were also outlined. One of the primary reasons is that it removes much

of the confusion and ambiguity currently associated with the term consciousness. Another

reason is that placement of physical systems on the consciousness continuum can be

determined by two factors: biological/physical makeup and the integrity of intention and

action. These two factors have already been considered determinants of subjective

conscious experience by many authors. However, few have focused on the reciprocal

connections between the organism and the environment as being integral to ‘generating’

conscious experience, or, in the context of the current theory, determining the placement of

a physical system on the consciousness continuum.


References

Chalmers, D. J. (1996). The Conscious Mind: In Search of a Fundamental Theory. New York: Oxford University Press.

Dennett, D. C. (1983). Intentional systems in cognitive ethology: The ‘Panglossian paradigm’ defended. Behavioral and Brain Sciences, 6(3), 343–390.

Levine, J. (1983). Materialism and qualia: The explanatory gap. Pacific Philosophical Quarterly, 64, 354–361.

Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83(4), 435–450.