The framework describes how Myllokunmingia’s brain worked in cybernetic terms: every moment begins with an initial state, ends with a goal state, and in between, the brain uses an internal model to guide its actions. This internal model is made up of neuronal proxies – stable configurations of activity in the brain that stand in for things in the environment. These proxies aren’t ideas or interpretations. They’re just patterns that form through experience and help the brain do its job.
To move toward its goal, the brain can’t simply rearrange its own workings directly. It has to act on something outside, and then sense what’s changed. That "outside" might be the water it swims in, or even the stomach contents or blood chemistry of the body it is in. All these things are outside the brain. And since time immemorial, the brain has had an imperative – it must reach its goal states by changing its sensory input, and it can do that only by acting on the world around it. So it forms a cycle: it acts on the world, detects the change, and adjusts – all the while seeking the goal state. That feedback is what allows the brain to guide the body toward its goal. It is a sleekly designed system that works.
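The act-sense-adjust cycle described above is, at bottom, a feedback control loop, and it can be sketched in a few lines of code. This is a minimal illustration, not anything from the framework itself: the names (World, sense, act, seek_goal) and the one-dimensional "world" are invented for the example.

```python
# A minimal sketch of the act-sense-adjust cycle: the brain cannot
# rearrange itself directly, so it acts on the world, senses the
# change, and adjusts - until sensory input matches the goal state.
# All names here are illustrative assumptions, not framework terms.

class World:
    """A one-dimensional stand-in for everything outside the brain."""
    def __init__(self, state: float):
        self.state = state

    def apply(self, action: float) -> None:
        self.state += action  # acting on the world changes it


def seek_goal(world: World, goal_state: float, gain: float = 0.5,
              tolerance: float = 0.01, max_steps: int = 100) -> int:
    """Loop: sense, compare to the goal, act on the world, repeat."""
    for step in range(max_steps):
        sensed = world.state              # sense: the only access to "outside"
        error = goal_state - sensed       # how far sensory input is from the goal
        if abs(error) < tolerance:
            return step                   # goal state reached
        world.apply(gain * error)         # act on the world, not on itself
    return max_steps


w = World(state=0.0)
steps = seek_goal(w, goal_state=10.0)
print(steps, round(w.state, 3))
```

The key design point mirrors the prose: `seek_goal` never writes to `world.state` directly; it only issues actions and reads back what the world reports.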
It’s how Myllokunmingia worked. It’s how all animals work, including humans.
But humans have something extra.
Very recently in evolutionary terms, certainly within the last two million years, the human brain gained a new add-on subsystem. This subsystem is conventionally described as the faculty of language (from the Latin lingua, tongue). But the framework uses more precise terminology, naming it according to its function: it is called the proxy transfer device (PTD). Its function is simple: it transfers neuronal proxy configurations from one brain to another.

And that's it! That's how the PTD works. That's all it does – at first. For engineering simplicity (evolution works that way), it activates the same neuronal proxy configurations as would be activated in normal experience.
But what are these neuronal proxies?
They’re not symbols, and they’re not pictures. They’re physical configurations in the brain – stable patterns that stand in for something in the environment. They don’t “represent” anything in the philosophical sense. They simply work. Like the pool of mercury in an old-fashioned thermostat, they flip or settle or trigger when the conditions are right. The brain builds these proxies through experience and keeps them ready for reuse.
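The thermostat analogy can be made concrete with a short sketch. The class `Proxy`, its threshold, and the temperature example are all invented for illustration; the point is only that the pattern flips when conditions are right, with no interpretation involved.

```python
# Sketch of a proxy as a condition-triggered switch, like the mercury
# pool in an old-fashioned thermostat. "Proxy" and its fields are
# illustrative inventions, not terminology from the framework.

class Proxy:
    """A stable pattern that settles or triggers when conditions are right."""
    def __init__(self, threshold: float):
        self.threshold = threshold
        self.active = False

    def update(self, signal: float) -> bool:
        # No symbol lookup, no interpretation: the pattern simply
        # flips when the incoming conditions cross its threshold.
        self.active = signal >= self.threshold
        return self.active


warmth = Proxy(threshold=20.0)
print(warmth.update(18.0))  # conditions not right: stays quiet
print(warmth.update(23.0))  # conditions right: the proxy triggers
```

Nothing in the object "represents" warmth in a philosophical sense; it just responds, which is the sense in which the prose says proxies "simply work".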
That’s the entire framework – except for one key feature.
In humans, in early childhood, the brain learns to detect what it is about to say. This is an instance of neural reuse: activity in one part of the brain is “hacked” to serve a function elsewhere. The child begins to “catch wind” of its own potential expression, without saying it. How? This is possible because the channels for speaking and listening work in tandem. Phonologists and neuroscientists have known for decades that throughout the two channels there are reciprocal “handshakes” that help each channel function – a kind of resonance. The child’s brain simply learns to attend to that resonance, while suppressing actual speech.
The operation of this loop is laid out in detail in the BAL-looping scientific papers. For now, suffice it to say that this looping is not felt as inner speech or internal monologue. Rather, because the most inward part of each channel is involved, the “gist” is felt directly, without any trace of inner speech. The brain experiences only the appearance of meaning. This is the activation of proxy patterns, the same ones used in seeing, doing, reading, and listening. In the BAL-looping framework, this is subjective experience.
This internal loop has many uses. It enables recollection, a process in which the brain rehearses what it might say about its recent experience. To do so, it looks into itself and uses a forensic or archaeological approach to conclude what must have happened, based on the current state – no separate memory banks, records, or photos. This same procedure of internal looping also allows for imagination, planning, and every other kind of subjective experience. In every case, it works the same way: the brain detects what it is almost ready to say – and that detection activates neuronal proxies, resulting in meaning.
In the end, the framework helps explain the most puzzling neurological phenomena – like blindsight, split-brain behavior, and how recollection works as reconstruction, not as the playback of a recording. It also reframes all sorts of cerebral operations, for example, active inference, shifting them from the input side of the central nervous system to the output side. In this view, conscious perception is not a window onto the world: it does not arise from sensory input, but from the brain’s detection of its own expressive signal – an event located on the side of output.
The framework is simple and logical, though it runs counter to several long-standing mistaken impressions that are hard to unlearn. For this reason, it cannot be effectively presented in the form of an academic treatise. This is why it’s presented in the form of a dialogue – conversations between Haplous and Synergos, who walk the reader through the trickiest turns.
Some readers, however, may prefer a more formal and academically structured presentation of this synopsis. They can find it in sections 1 through 3 of either of the papers Reportable Awareness vs. Foundational Competence: A Functional BAL/Looping Account of Split-Brain Phenomena, or A Functional Basis for Concept Mastery linked on the Scientific Papers page.
But if you are ready to begin exploring the dialogues themselves, you can choose your next step below: