Listen up, class. This is a test. There are ten ravens on the roof of a house. A hunter shoots one of them with his rifle. How many of the ravens are left on the roof? Think about it for a moment before reading on.
The logical answer is nine, of course. Any digital computer would tell you so. Actually the 'right' answer is ten. The gun was silenced and the roof was flat. The dead raven keeled over and the other nine did not notice. But wait a minute: suppose the gun was not silenced and the roof was sloping? Then the answer would be none, because the dead raven would slide off, and the others, startled by the gunshot, would take flight. These are the questions that are posed by real life. Real life does not deal in simple answers to complex, contextual questions. Real life doesn't work like a digital computer, and neither does your brain.
In an IQ test the 'right' answer is nine, but nine is only one answer; there are other ways of thinking about it. Thinking logically is not the same as thinking. Think back and notice how your brain changed gear when you realised this was not an IQ test. The whole context of the problem determines how we think about it. The context is part of the information. The leap of inspiration, the new idea, is rarely the result of logical thought. 'Logical thought' is almost a contradiction in terms. In fact human beings are not very good at thinking logically. The laws of thought are not the laws of logic.
Cognitive scientists and modern neuroscience have addressed these issues. Much of today's work on the brain and thinking is represented in the field called neurocomputing, which has spent nearly 40 years studying the neural network architecture of the brain -- very different from the reflex-arc, stimulus/response model of the brain that 40 years ago launched the invention of the digital computer and also served as the model for strategies in NLP. Real brains don't work in lock-step, one-cell-communicates-to-another-cell processes. Real brains are massively interconnected, widely distributed, simultaneously operating constellations of parallel processes. Remember when we taught you that a strategy in NLP follows nice, neat steps, one at a time? Well, we lied. It all happens at once.
Obviously, our current thinking about internal computations in NLP needs updating. This is the first of a series of articles written in an attempt to improve our models of strategies and to incorporate some of the valuable discoveries of the past few decades which are currently revolutionising computer science, neurology and cognitive psychology.
But first, a bit of history.
Ever since Egyptian surgeons cracked open the brain five thousand years ago, we have tried to understand it and to find mechanical models of the way it functions. Aristotle thought the brain's function was to cool the blood. Descartes thought consciousness was seated in the pineal gland and that the rest of the brain stored memories as traces. Later metaphors usually tried to explain the brain in terms of the man-made devices of the modeller's own historical period: pipe organs, telephone switchboards, or more recently, digital computers.
Neuro-Linguistic Programming was coined in 1975 as a name for a field that brought together neurology and linguistics. The 'programming' refers to how we organise our actions and ideas to produce results, and the metaphor comes from computer science. There are vast differences between computers in 1975 and computers now. The most obvious is that I now have a computer on my desk that is more powerful than the one that took up the whole basement floor at the University of London when I was there in the nineteen seventies. Computers have changed immensely in both power and design, and our understanding of how the brain works is vastly different. The digital computer was never an adequate model of the brain. Artificial intelligence is no substitute for the real thing.
Our proposal is that NLP is caught in an outdated metaphor.
Digital Computers
The idea of the 'thinking machine' probably goes back to George Boole's book An Investigation of the Laws of Thought, written in 1854. Boole devised a way of representing logic mathematically. He believed that the connection between algebra and language pointed to a higher logic, which he called the laws of thought. Boolean logic is used universally in digital computers.
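As a minimal sketch (our own illustration, not Boole's notation), here is his central idea in a few lines of Python: logic done as arithmetic on the values 0 and 1, the algebra every digital computer still runs on.

```python
# A minimal sketch of Boolean algebra: logic as arithmetic on 0 and 1.
# AND is multiplication, OR is addition capped at 1, NOT is subtraction from 1.

def AND(x, y): return x * y
def OR(x, y):  return min(x + y, 1)
def NOT(x):    return 1 - x

# Check every combination of truth values against De Morgan's law:
# NOT(x AND y) == (NOT x) OR (NOT y)
for x in (0, 1):
    for y in (0, 1):
        assert NOT(AND(x, y)) == OR(NOT(x), NOT(y))
print("De Morgan's law holds for all 0/1 inputs")
```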
The next main step was taken some eighty years later by Alan Turing, who developed a generalised model of computation that is still the basis of the most complex and powerful machines operating today. He proved that a machine manipulating a binary alphabet of zeros and ones could solve any problem that can be computed at all.
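To see how little machinery that claim needs, here is a toy simulator in the spirit of Turing's machine (our own example, not his original notation): a head crawling over a tape of symbols, driven by a fixed rule table. This one adds one to a binary number.

```python
# A toy Turing machine: a tape of symbols, a read/write head, a rule table.
# This rule table increments a binary number on the tape (e.g. 1011 -> 1100).

def run(tape, rules, state="right", pos=0):
    while state != "halt":
        symbol = tape[pos] if 0 <= pos < len(tape) else "_"
        state, write, move = rules[(state, symbol)]
        if pos == len(tape): tape.append("_")       # grow the tape rightward
        if pos < 0: tape.insert(0, "_"); pos = 0    # grow the tape leftward
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape).strip("_")

rules = {
    # Scan right to the end of the number...
    ("right", "0"): ("right", "0", "R"),
    ("right", "1"): ("right", "1", "R"),
    ("right", "_"): ("carry", "_", "L"),
    # ...then add 1, carrying leftward over the 1s.
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt", "1", "R"),
    ("carry", "_"): ("halt", "1", "R"),
}

print(run(list("1011"), rules))  # -> 1100
```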
John von Neumann took Turing's ideas and applied them in practice. He was fascinated by how the mind reasons and believed that he was modelling the brain.
Von Neumann created a computer design that was an innovation at the time: a memory unit that stored both the numbers for the calculations and the instructions (the program) for carrying them out. This was a big step forward; existing computers had to be rewired for each different kind of calculation. Von Neumann thought at the time that this common memory was a model of the mind's flexibility. However, it creates a bottleneck: the contents of the memory can only be examined one piece at a time. Modern computers have pushed up the speed at which these operations take place, but the bottleneck remains. The brain has no such bottleneck; it has billions of autonomous neurons that function simultaneously.
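A caricature of that design makes the bottleneck visible (purely illustrative; real instruction sets are far richer): program and data share one memory, and a single fetch-execute loop touches it one word at a time.

```python
# A caricature of the von Neumann machine: instructions and data live in
# ONE memory, and a single fetch-execute loop reads it one word at a time.
# The program below computes 2 + 3 and halts.

memory = [
    ("LOAD", 6),     # 0: fetch the word at address 6 into the accumulator
    ("ADD", 7),      # 1: add the word at address 7
    ("STORE", 8),    # 2: write the result to address 8
    ("HALT", None),  # 3: stop
    None, None,      # 4-5: unused
    2, 3, 0,         # 6-8: data shares the same memory as the code
]

acc, pc = 0, 0
while True:
    op, addr = memory[pc]   # one memory access per step: the bottleneck
    pc += 1
    if op == "LOAD":    acc = memory[addr]
    elif op == "ADD":   acc += memory[addr]
    elif op == "STORE": memory[addr] = acc
    elif op == "HALT":  break
print(memory[8])  # -> 5
```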
Digital computers differ from our brains in several ways. They work through a centralised processing unit that handles data one unit at a time, and however many units they have working in parallel, each one works in a linear, sequential way. Because of this bottleneck, the faster they can work and the more units operate simultaneously, the better. This brings to mind, quite illogically, a picture of people examining Einstein's brain to see if it was bigger than usual. Because digital computers work in a linear and sequential way, they work through cause and effect and logical algorithms: IF... THEN... sequences. Modern research is painting a picture of the brain whose processes are much more complex.
Digital computers are often too accurate for their own good. If the answer has to be 'yes' or 'no', this paradoxically limits your thinking; more often it needs to be 'maybe' or 'perhaps', depending on what else is happening. Great precision can be an advantage in mathematical problems, but elsewhere it is usually a liability. We select and filter from a range of possibilities and keep as many options open as possible, rather than trying to close them down. The results of one train of thought are often fed back through the same process to refine it. Paradoxically, it takes enormous computing power to mimic this imprecise quality of human thinking.
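One way to picture the difference, borrowing the flavour of fuzzy logic (our illustration, not a system discussed in this article; the membership function is made up): replace the two-valued yes/no with a graded degree of truth between 0 and 1.

```python
# Crisp versus graded truth: a toy 'is it warm?' judgement.
# The crisp version forces yes/no at a threshold; the graded version
# keeps a degree of truth between 0.0 and 1.0, in the spirit of fuzzy logic.

def warm_crisp(temp_c):
    return temp_c >= 20  # True or False, nothing in between

def warm_graded(temp_c):
    # 0.0 below 10 C, 1.0 above 25 C, a sliding 'maybe' in between
    return min(max((temp_c - 10) / 15, 0.0), 1.0)

for t in (5, 15, 19, 21, 30):
    print(t, warm_crisp(t), round(warm_graded(t), 2))
# 19 C and 21 C are worlds apart to the crisp test,
# nearly identical to the graded one.
```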
Now some really crucial differences. Digital computers do not learn; they house a body of knowledge. They suggest a library or database metaphor for the human brain. In a computer, the data is independent of the system that holds it: it is irrelevant in which library you find the book, it will always be the same, and the book or database can be transferred from system to system without change. Here is where the metaphor breaks down. You cannot transfer knowledge from one mind to another. The meaning of this article will not be the same to you as it is to me. Meaning is dependent on context, as any raven will tell you. There is also the famous story of the computer analysis of accidents in the home. Accident figures were collected, and statistical analysis found that of the accidents that take place on stairs, the vast majority happen on the top and bottom stair. Logical answer: remove the top and bottom stair.
Computers also need a programmer, external to the computer.
There was hope, and the promise, that computers would 'think' and beat humans at their own game. The best example is probably the work that went into developing a computer program that could play chess and hold its own against top chess masters. There were high hopes for this initially; it seemed the ideal test. Chess masters supposedly analysed out trains of possibilities in their minds, and the right move was the one that won them material or gave an advantage at the end of the chain. The best chess masters, according to this model, were the ones who could see furthest ahead and analyse more trees of possible moves. Unfortunately human players make mistakes. Either they do not consider certain possibilities, because the number of possible moves on a chess board is astronomical, or they embark on a plan only to find that their analysis did not look far enough ahead, and their opponent surprises them with a move they had not foreseen. All the computer had to do was calculate further ahead than a human player and consider all the possibilities the human player had missed. Simple, at least in principle.
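The scheme can be sketched in a few lines of illustrative minimax search (an abstract game tree, not a real chess engine; moves() and evaluate() are stand-ins for real game rules): look ahead to a fixed depth and assume each side picks its best reply. The catch is the arithmetic: at roughly 30 legal moves per position, ten half-moves of lookahead already means about 30 to the power 10 -- nearly 600 million million positions.

```python
# Illustrative minimax: exhaustively search an abstract game tree to a fixed
# depth, assuming each side always chooses its best reply.

import random

def moves(position):
    return [position * 10 + i for i in range(1, 4)]  # three moves per position

def evaluate(position):
    random.seed(position)            # deterministic toy score for a leaf
    return random.uniform(-1, 1)

def minimax(position, depth, maximising):
    if depth == 0:
        return evaluate(position)
    scores = [minimax(m, depth - 1, not maximising) for m in moves(position)]
    return max(scores) if maximising else min(scores)

print(round(minimax(1, depth=6, maximising=True), 3))
# Work grows as (moves per position) ** depth: 3**6 = 729 leaves here,
# but around 30**10 for a serious chess search.
```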
Although computer chess has made big steps forward in the last ten years, the gap between the best chess-playing computer programs and the best human chess players is as wide as ever. Top-ranking computers have ratings that just get them into the world's top hundred.
When you model excellent chess players, what you typically find is that they have a 'feel' for positions which they do not analyse out. They base this feeling on similar positions they have seen in the past. They calculate fewer moves ahead than a computer, but as they calculate they assess the positions in their mind's eye, discarding many as undesirable without bothering to work out exactly why they are bad. When one top chess player was asked how many moves he could see ahead, he replied, "One. But it's always the best one!"
Even in the simpler game of draughts, the unofficial champion Dr. Marion Tinsley defeated his nearest rival, the computer challenger Chinook, by two games in forty. A small margin, but Chinook can calculate three million moves a minute and look up to twenty moves ahead. Dr. Tinsley, who had lost only five games since becoming champion in 1955, is reputed to have said, "Chinook has been programmed by man, but I have been programmed by God."
Neuro-Linguistic Metaphor
How has the programming metaphor influenced NLP? Think of modelling. NLP developed originally by taking the patterns of idiosyncratic geniuses (Perls, Satir and particularly Erickson) and applying them in different fields. This has been incredibly useful and generative in some ways, and disastrous in others. Patterns have been taken out of context, and this has led to the whole morass of manipulation and values issues. If you extract Erickson's incredible hypnotic influencing skills and treat them as if they can be transferred independently of Erickson's ethics and values, you are asking for trouble. The trouble caused is proportional to the power of the tools you have. Perhaps this is why Gregory Bateson, who enthusiastically endorsed The Structure of Magic I by Bandler and Grinder, is later reported to have said, "NLP? If you come across NLP, run as fast as possible in the opposite direction. I have stopped sending people to study Milton; they all come back power hungry."
Many NLP techniques read like algorithms. Step one: get rapport. Step two: access a state. Step three... These step-by-step models of techniques are useful as long as we remember they do not actually happen sequentially. They are a useful fiction, a frozen abstraction. What is it like to do a six-step reframe with a new behaviour generator while collapsing anchors by metaphor?
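Spelled out the way the manuals do, a technique really is shaped like a program. Here is a deliberately literal rendering (the step names are the familiar ones; the code is ours): notice that the notation can only say 'then', never 'meanwhile'.

```python
# A deliberately literal rendering of an NLP technique as a program.
# The strict one-after-another order is the point: this notation
# cannot express things happening at once.

def get_rapport(client):  print("matching, pacing...")
def access_state(client): print("eliciting the state...")
def anchor_state(client): print("setting the anchor...")

def technique(client):
    get_rapport(client)    # step one
    access_state(client)   # step two
    anchor_state(client)   # step three -- but in the room,
                           # all three are happening at once

technique("client")
```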
The other main area where the programming metaphor has had an effect is strategies and modelling. Most of the stimulus-response anchoring and strategies models of internal processing were based on Miller, Galanter and Pribram's revolt against the limitations (and behaviourist tyranny) of the stimulus-response reflex arc in the central nervous system. Even one of the fathers of this model, Sir Charles Sherrington (along with Ivan Petrovitch Pavlov), said in 1906, "The simple reflex is a useful fiction." Miller et al. improved on the reflex-arc model by adding a feedback loop to the historically sequential model of neural communication.
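Their TOTE unit -- Test-Operate-Test-Exit -- can be sketched as a loop (the structure is theirs, this rendering is ours, using their classic example of hammering a nail): instead of a stimulus firing a response in a straight line, the system keeps operating and re-testing until the test is satisfied.

```python
# A sketch of Miller, Galanter and Pribram's TOTE unit
# (Test-Operate-Test-Exit): behaviour as a feedback loop rather than
# a one-way reflex arc. Example: hammering a nail until it is flush.

def tote(test, operate, state):
    while not test(state):      # Test: is the goal met?
        state = operate(state)  # Operate: act on the world, then re-test
    return state                # Exit: the test passed

nail_height_mm = 12
flush = tote(test=lambda h: h <= 0,
             operate=lambda h: h - 3,  # each hammer blow drives it 3 mm
             state=nail_height_mm)
print(flush)  # -> 0
```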
The received wisdom on strategies is that you model by eliciting physiology, beliefs and internal sequences of sensory representations. Strategies consist of a sequence of representational systems, with corresponding submodalities. Diagrams of strategies are mapped like algorithms, with loops, arrows and steps. These maps are not the territory.
The brain is not a computer
The human brain weighs about three pounds and consists of over 100 billion neurons; the cerebral cortex alone has over ten billion. It is the connections between the nerve cells that matter more than the cells themselves. A single neuron can have up to one hundred thousand inputs. The cortex has over one million billion connections; if you counted one every second, it would take you thirty-two million years.
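That last figure is easy to check with back-of-envelope arithmetic: 10 to the power 15 connections, counted at one per second, divided by the seconds in a year.

```python
# Back-of-envelope check of the counting claim.
connections = 1_000_000_000_000_000    # one million billion, 10**15
seconds_per_year = 60 * 60 * 24 * 365  # ignoring leap years
print(connections / seconds_per_year / 1e6)  # -> about 31.7 million years
```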
We do not all have the same 'hardware': no two brains are alike. We are born with all our neurons, and in the first year of life up to 70% die before some structures are complete. The surviving neurons form an ever more complex web of connections, and our brain quadruples in size. Certain connections are reinforced by use; others wither. We learn by trial and error. The nerve cells are specialised and form a hyperdense web. The brain is not independent of the world; it is shaped by the world. Neuroscientists today describe the brain as an interconnected, decentralised, parallel-processing, distributed network of simultaneous waves of interactive resonance patterns. The brain is as complex as our vanity hoped, and our intellect feared.
The computer metaphor would have the mind manipulating a system of symbols with logical rules. If this were so, the mind could indeed be studied independently of the brain. But the mind is not the brain, and to make theories of how the mind works without taking the brain into account is very risky. The brain transcends all models, because it builds all models. Brains use processes that change themselves: they make memories that change the ways we think in the future. We build perceptual filters that determine what we pay attention to, and what we pay attention to reinforces some networks and so builds the perceptual filters. The brain must sample many different features of the world at the same time; we cannot know in advance what to look for, because the world does not come with labels attached. We attach the labels and then often forget that we did so, thinking instead that the labels are an intrinsic part of the world. Computers can expand the nervous system; they cannot replace or model it. In fact, many cyberneticians build computers merely to understand better how we think the brain might work.
In a second article we would like to explore the neural network types of computer that are modelled on the way the brain works, and then to start evolving a new model of strategies that is not so digitally based.
Finally, a story from Gregory Bateson. He tells of the man who wanted to know about the mind, what it really was, and whether computers would ever be as intelligent as humans. The man typed the following question into the most powerful contemporary computer (which took a whole floor of a university department), "Do you compute that you will ever think like a human being?"
The machine rumbled and muttered as it started to analyse its own computational habits. Eventually the machine printed its reply on a piece of paper. The man rushed over in excitement and found these words, neatly typed: "That reminds me of a story... "
Bibliography
Bright Air, Brilliant Fire, Gerald Edelman, Penguin, 1992.
The Embodied Mind, Francisco Varela, MIT Press, 1991.
Apprentices of Wonder, William Allman, Penguin, 1988.
Cognizers, R. Johnson, Wiley, 1988.
Brian Van der Horst, an NLP trainer who lives in Paris, started his professional life as a marine biologist and also writes for "INTELLIGENCE -- The Future of Computing," an international newsletter specializing in neurocomputing, artificial intelligence and electronic networking.