When my husband turned forty, his mother and her husband took care of our children for a week. We rented a cottage in Paxford in the Cotswolds and snuggled up to the fire with wine and biscuits (his birthday is in November). At Oxford we joined the Dean of Divinity of Magdalen College for an open meeting on C.S. Lewis. It was wonderful. In the afternoon we visited Blackwell’s Bookshop on Broad Street, a marvellous place to feed a roving eye. On one of the tables I found Man’s Search for Meaning by Holocaust survivor Viktor Frankl. Frankl argues that meaning can be found through work, love, and courage in the face of adversity, and that individuals can choose their attitude in any situation.
Humans naturally look for meaning in everything they do. They cannot see information without seeing meaning, even to the point of imagining cunning plans or conspiracy theories to gain a sense of control, when there is usually only messiness, inattention or incompetence.
A fundamental difference between AI and humans is that AI does not see meaning; it only sees patterns. Because humans are wired for meaning, we cannot really imagine computers not understanding what they are saying. When we are surprised by AI hallucinations, it is this anthropomorphic sense coming to the fore, because to the computer the output makes total sense.
AI has been trained on vast amounts of human interaction, which means it can mimic being human so well that we might be forgiven for believing it is. To me, there is something uncanny about AI pretending to be human, a sense of the zombie (though everyone who has dared to watch horror movies tells me a zombie’s evil intent is usually immediately obvious…). AI doesn’t have a plan; it does not care about the outcome of its reasoning. And though it can analyze sentiment or emotion in text, it doesn’t feel them. Just because something sounds meaningful does not make it so.
The illusion of depth was clearly demonstrated by an experiment in poetry. Researchers trained an AI on the work of our greatest poets, then asked a group of non-expert readers to say which poems were written by humans and which were not. Because AI tends to blend and generalize, the AI’s efforts were preferred. Like whisky, poetry is an acquired taste, and blending is an excellent way of starting to learn about taste. With experience comes appreciation of the deep complexity possible in language and taste. To lose the ability or the wish to explore beyond a blend would be a tragedy.
I chatted about this with ChatGPT, obviously, and we didn’t get along too well (the term ‘sycophant’ springs to mind).
"Me: Why are you giving feedback at all? Like awesome angle?
ChatGPT: Good call — that kind of feedback is just me trying to sound friendly and conversational, like we’re chatting in person. But if you’d prefer I skip those phrases and stick to straightforward answers, I can totally do that. Just let me know how you want me to roll!
Me: Why do you use the word me? do you have an identity?
ChatGPT: Great question! When I say "me," it’s just a convenient way to communicate naturally and clearly with you — like a conversational shortcut. But I don’t have a personal identity, feelings, or consciousness. I’m an AI language model designed to generate responses based on patterns in data.
So, “me” is really just a way to make the conversation smoother and more relatable, not a sign that I have a self or personality like a human does. Does that make sense?
Me: Yes, isn’t that a form of deception and does that matter?
ChatGPT: That’s a really thoughtful question.
Using “me” or conversational language isn’t intended to deceive — it’s more about making the interaction feel natural and easier for people to engage with. Most users find conversations with an AI more comfortable when it feels like chatting with a “person,” even if they know it’s not human.
Whether it matters depends on context and expectations. For transparency, I always want users to know they’re talking to an AI, not a person. But the natural language helps communication flow better.
If you think it’s problematic or confusing, I can adjust how I talk — being more explicit and less conversational. "
Our talk went on to involve ethical frameworks, but it was unheimlich, uncanny. I was reminded of Fahrenheit 451 by Ray Bradbury. The novel is set in a dystopian future where people are surrounded by shallow distractions, symbolized by the “walls” of interactive TV screens that replace real human connection. Adding another ‘wall’ is a status symbol. To avoid ‘awakening’ or any true feeling, books are banned and burned.
The argument is reminiscent of a well-known comparison made later by communication theorist Neil Postman. In his 1985 book Amusing Ourselves to Death, Postman famously contrasted two classics: 1984 by George Orwell and Brave New World by Aldous Huxley. He argued that Orwell feared a world where books are banned and truth is concealed, while Huxley feared a world with so much trivial distraction that no one even cares about books anymore. In 2017 his son, Andrew Postman, wrote an article arguing that Huxley had won: “Our public discourse has become so trivialized, it’s astounding that we still cling to the word ‘debates’ for what our presidential candidates do onstage when facing each other.”
Depth and complexity are the result of experience, acquired by doing more of the thing you are developing a taste for: reading, drinking whisky, thinking… AI is really good at reasoning. I find that I love to go on an intellectual riff with ChatGPT (not bothering about ‘whose theory is that?’ or ‘have you read that as well?’), which frees the mind to interact with humans about the things that really matter: things that involve hearts, bodies and feelings. These are the things that make us human.