The framework describes how Myllokunmingia’s brain worked in cybernetic terms: every moment begins in an initial state and is aimed at a goal state, and in between, the brain uses an internal model to guide its actions. This internal model is made up of neuronal proxies – stable configurations of activity in the brain that stand in for things in the environment. These proxies aren’t ideas or interpretations. They’re just patterns that form through experience and help the brain do its job.
To move toward its goal, the brain can’t rearrange its own workings directly. It has to act on something outside, and then sense what has changed. That “outside” might be the water it swims in, or even the stomach contents or blood chemistry of the body it inhabits. All of these things are outside the brain. And since time immemorial, the brain has had one imperative – it must reach its goal states by changing its sensory input, and it can do that only by acting on the world around it. So the brain runs a cycle: it acts on the world, detects the change, and adjusts – all the while seeking the goal state. That feedback is what allows the brain to guide the body toward its goal. It is a sleekly designed system that works.
It’s how Myllokunmingia worked. It’s how all animals work, including humans.
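For readers who like to see a mechanism laid out, here is a minimal sketch of that act, sense, and adjust cycle. It is an illustration only, not the framework’s own notation: the names (world, sense, act, choose_action, goal) and the toy “world” of a single number are assumptions made for the example.

```python
def feedback_loop(world, sense, act, choose_action, goal, max_steps=100):
    """Guide the world toward a goal state by acting, sensing, and adjusting."""
    state = sense(world)                      # the initial state, known only through the senses
    for _ in range(max_steps):
        if state == goal:                     # goal state reached; the cycle can rest
            return state
        action = choose_action(state, goal)   # the internal model picks the next act
        act(world, action)                    # the brain can only act on the "outside"...
        state = sense(world)                  # ...and then detect what has changed
    return state

# Toy world: a single number the agent can nudge up or down by one per step.
world = {"value": 0}
sense = lambda w: w["value"]
act = lambda w, a: w.update(value=w["value"] + a)
choose_action = lambda s, g: 1 if s < g else -1
print(feedback_loop(world, sense, act, choose_action, goal=5))   # prints 5
```

The point of the sketch is only the shape of the loop: the goal is never reached by internal rearrangement, but by acting and then sensing the result.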
But humans have something extra.
Very recently in evolutionary terms, certainly within the last two million years, the human brain gained a new add-on subsystem. This subsystem is conventionally described as the faculty of language (from the Latin lingua, tongue). But the framework uses more precise terminology, naming it according to its function: the proxy transfer device (PTD). Its function is simple: it transfers neuronal proxy configurations from one brain to another.
And that’s it! That’s how the PTD works. That’s all it does – at first. For engineering simplicity (evolution works that way), it activates in the receiving brain the same neuronal proxy configurations that normal experience would activate.
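A toy sketch can make that single job concrete. The Brain class and the shared word-to-proxy table below are assumptions made purely for illustration; the only point is that hearing a word activates the same configuration that direct experience would.

```python
# Shared word-to-proxy table: an illustrative assumption, not a claim about vocabulary.
SHARED_PROXIES = {
    "water":    frozenset({"wet", "flows"}),
    "predator": frozenset({"danger", "flee"}),
}

class Brain:
    def __init__(self):
        self.active_proxies = set()          # the configurations currently active

    def experience(self, thing):
        # Direct experience activates the proxy configuration for that thing.
        self.active_proxies |= SHARED_PROXIES[thing]

    def hear(self, word):
        # The PTD's whole job: a heard word activates the very same
        # configuration that direct experience would have activated.
        self.active_proxies |= SHARED_PROXIES[word]

speaker, listener = Brain(), Brain()
speaker.experience("predator")               # set by direct experience
listener.hear("predator")                    # transferred by the word alone
assert listener.active_proxies == speaker.active_proxies
```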
But what are these neuronal proxies?
They’re not symbols, and they’re not pictures. They’re physical configurations in the brain – stable patterns that stand in for something in the environment. They don’t “represent” anything in the philosophical sense. They simply work. Like the pool of mercury in an old-fashioned thermostat, they flip or settle or trigger when the conditions are right. The brain builds these proxies through experience and keeps them ready for reuse.
That’s the entire framework – except for one key feature.
At some point – not in evolution, but in early childhood – the brain learns to detect what it is about to say. This is an example of neural reuse: the brain borrowing one of its own tools and finding something new to do with it. The child starts to “catch wind” of its own expressive signal before it’s spoken. Sometimes it hears the words internally. But most importantly, it can go straight to the end result – the appearance of meaning – which is just the activation of proxy patterns, the same ones used in seeing, doing, reading, and listening.
This internal loop has many uses. It enables recollection, a process in which the brain rehearses what it might say about its recent experience. To do so, it looks into itself and, like a forensic or archaeological investigator, infers what must have happened from its current state – no separate memory banks, records, or photos. This same procedure of internal looping also allows for imagination, planning, and every other kind of subjective experience. In every case, it works the same way: the brain detects what it is almost ready to say – and that detection activates neuronal proxies, resulting in meaning.
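The same kind of toy sketch, again with purely illustrative names, can show the internal loop: the brain detects what it is about to say, feeds that detection back through the very activation path it uses for listening, and reconstructs an episode from its current state rather than replaying a recording.

```python
# Illustrative word-to-proxy table, as in the earlier sketch.
SHARED_PROXIES = {
    "river":   frozenset({"wet", "flows"}),
    "crossed": frozenset({"movement", "effort"}),
}

class Brain:
    def __init__(self):
        self.active_proxies = set()
        self.body_state = {"fur": "wet", "muscles": "tired"}   # the current state, nothing more

    def activate(self, word):
        # One shared path: the same routine serves listening and self-listening.
        self.active_proxies |= SHARED_PROXIES.get(word, frozenset())

    def about_to_say(self):
        # Forensic reconstruction: infer what must have happened from the
        # present state of the body, not from any stored recording.
        if self.body_state["fur"] == "wet":
            return ["crossed", "river"]
        return []

    def recollect(self):
        # Neural reuse: detect the not-yet-spoken words and feed them back in.
        for word in self.about_to_say():
            self.activate(word)

brain = Brain()
brain.recollect()
print(brain.active_proxies)     # e.g. {'movement', 'effort', 'wet', 'flows'}
```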
In the end, the framework helps explain the most puzzling neurological phenomena – like blindsight, split-brain behavior, and how recollection works as reconstruction, not as the playback of a recording. It also reframes all sorts of cerebral operations, such as active inference, shifting them from the input side of the CNS to the output side. Because in this view, conscious perception is not a window onto the world. It does not arise from sensory input, but from the brain’s detection of its own expressive signal – an event located on the output side.
The framework is simple and logical, though it runs counter to several long-standing mistaken impressions that are hard to unlearn. For this reason, it cannot be presented effectively as an academic treatise; instead, it takes the form of a dialogue – conversations between Haplous and Synergos, who walk the reader through the trickiest turns.
Some readers, however, may prefer a more formal and academically structured presentation of this synopsis. They can find it in sections 1 through 3 of this paper: Split-Brain: A Functional Reframing.
But if you are ready to begin exploring the dialogues themselves, you can choose your next step below: