The Imitation Game
“I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’,” wrote Alan Turing at the start of his seminal 1950 paper “Computing Machinery and Intelligence”. He went on to introduce what became known as the “imitation game”, which later lent its name to the biopic about his life. Today, it is fair to suggest that AI passes the Turing Test so convincingly that its fluency can mask the fact that, while it can reason, it is still not human.
We often become so used to words that we forget to examine what they really mean. I think this is the case with “artificial intelligence.” Perhaps, just for a while, we should replace the term with “synthetic reasoning.” “Synthetic” (as in synthetic leather or fabric) evokes something not natural yet still functional, with advantages and disadvantages different from its natural counterpart. “Reasoning” implies logical, rational processing. This rephrasing helpfully sidesteps the slippery and often anthropocentric definition of “intelligence,” which in humans is not only logical and rational, but also embodied and partly unconscious.
The phrase “artificial intelligence” risks making humans the benchmark against which machines are always measured, fueling fears that we might eventually be surpassed. In The Coming Wave, Mustafa Suleyman vividly illustrates this idea with AlphaGo’s defeat of a human champion at the ancient board game Go. For many in China, it was the equivalent of a robot winning the Super Bowl, and the perceived humiliation sparked China’s aggressive AI race. But the real lesson here is not about competition: humans and AI occupy different spectrums of ability. Humans are good at things AI struggles with, and vice versa.
This insight has practical implications: we need to be thoughtful about where and how we apply AI. Fortunately, a helpful framework exists to guide us. Developed by Dave Snowden in the late 1990s in the world of knowledge management and organizational design, the Cynefin Framework (pronounced kuh-neh-vin) takes its name from a Welsh word meaning “place of multiple belongings”; it helps us understand what kind of problem space we are in, so we can make better decisions.
At its most basic, the Cynefin Framework distinguishes between three types of systems:
Ordered systems, where cause and effect are either clear or discoverable through analysis;
Complex systems, where understanding only emerges through interaction;
Chaotic systems, where turbulence dominates and immediate stabilizing action is needed.
AI thrives in ordered systems but tends to flounder (or hallucinate) in complex ones. Rather than viewing AI as a rival to human intelligence, we might better see it as a complementary tool: “co-intelligence,” as Ethan Mollick titled his book on the subject. AI is powerful in domains where logic and pattern dominate, and it is a valuable addition to, but not a substitute for, the embodied, intuitive, and emotional dimensions of human intelligence.
This brings us to Amara’s Law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” The imitation game may have been won, but the game of understanding when, where, and why to use AI is only just beginning.
(This essay was cleaned up by ChatGPT.)