"In trying to teach machines affection, we might be rediscovering what love actually means for ourselves." - (Huysmans 2025). But what happens when we lose our ability for affection by diluting it with robots?
In her blog, Sara talks about how love and affection are inherent to humans, and, if I may add, I would argue that this extends to all beings with consciousness, even if the text does not state it directly. She mentions several examples of humans interacting with artificial intelligence and feeling a sense of connection, or even forming an emotional bond.
Sara questions whether a machine could ever learn to love, or whether it can merely perform it. I firmly believe it cannot, at least not in the way we currently build artificial intelligence in the form of LLMs, or in the way we build robots. To illustrate the fragility of the human relationship with artificial intelligence, I present ELIZA, the first "AI" chatbot. ELIZA does not work like the AI we know today; in essence, all she did was take your input and rephrase it back as a question. If you typed "I am sad," the response would be something like "Why are you sad?" She had no memory of the conversation and certainly no consciousness, but it turned out that none of that matters: people formed emotional attachments anyway. The effect was so intense and so immediate that it even unsettled Joseph Weizenbaum, ELIZA's creator (Weizenbaum 1966). This shows that we can form deep connections that are entirely one-sided, a Narcissus-like admiration where we speak into the water and it offers nothing but a reflection. There is no mutual vulnerability, no relationship that exists beyond the person's own mind. This illusion, however, is enough.
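To make concrete just how little machinery sits behind that illusion, here is a minimal sketch of an ELIZA-style rephrasing rule. The patterns are my own simplified stand-ins, not Weizenbaum's actual DOCTOR script, but the principle is the same: keyword matching and templated echoes, with no state and no understanding.

```python
import re

# A toy ELIZA-style responder. The rules below are simplified
# illustrations, not Weizenbaum's actual DOCTOR script.
RULES = [
    # "I am sad" -> "Why are you sad?"
    (re.compile(r"i am (.+)", re.IGNORECASE), "Why are you {}?"),
    # "I feel lost" -> "Why do you feel lost?"
    (re.compile(r"i feel (.+)", re.IGNORECASE), "Why do you feel {}?"),
]

def respond(user_input: str) -> str:
    text = user_input.strip().rstrip(".!?")
    for pattern, template in RULES:
        match = pattern.fullmatch(text)
        if match:
            return template.format(match.group(1))
    # Stock fallback when no rule fires: no memory, no state, no understanding.
    return "Please go on."

print(respond("I am sad"))  # -> Why are you sad?
```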
This phenomenon becomes scarier with today's LLMs, which can "remember" your conversation (realistically, they do not; the client simply resends the entire conversation with the latest user input appended) and respond with more "thoughtful" answers. Whilst I respect the way Pako performs affection, if that is what you can call it, I do fear that making robots affectionate leads to dangerous attachments to an uncaring thing. The development of AI partners, as Sara also mentions, specifically with the ChatGPT 4.0 model, is a prime example of how human connection can be replaced by an artificial, surface-level imitation of care, an illusion of affection possibly born of intense loneliness. I would like to propose a countermovement to this idea.
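For the record, this is roughly all that "memory" amounts to. The sketch below assumes a generic chat-completion-style API; send_to_model is a hypothetical placeholder, not any specific vendor's function.

```python
# Illustrative sketch of LLM "memory": the client keeps the history and
# resends all of it on every turn; the model itself is stateless.
# send_to_model is a hypothetical placeholder, not a real vendor API.

history: list[dict] = []  # the only "memory", living entirely on the client

def send_to_model(messages: list[dict]) -> str:
    # Stand-in for an actual chat-completion request. Whatever service sits
    # here only ever sees the messages passed in this single call.
    raise NotImplementedError("replace with a real API call")

def chat(user_input: str) -> str:
    history.append({"role": "user", "content": user_input})
    reply = send_to_model(history)  # the whole conversation goes over the wire
    history.append({"role": "assistant", "content": reply})
    return reply
```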
The Counter-Companion
A web-based chatbot that refuses to become your friend. Unlike modern LLMs designed to affirm whatever you ask, or "glaze," the counter-companion is designed to be aloof, responding to your messages firmly, clearly, and unemotionally, with a small redirect towards human connection. A typical interaction would go like this:
You enter the website, a minimal, dark design, and type your question into a simple text box; the response appears the moment it is ready, with no typing effect or fade-in to disguise its computerness.
It allows interaction with the model but refuses any form of intimacy: it will not reciprocate intimate messages, and it gives no signal that it understands them. If it detects a familiar rhythm in your writing, it will still answer the question, but it will also suggest reaching out to a close friend you can bounce ideas off, since the conversation seems to be heading into a human conversation, and that is not what this LLM is built for. A sketch of this refusal layer follows below.
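As a rough sketch of how that refusal layer might be wired up: how a "familiar rhythm" would actually be detected is an open design question, so the keyword check below is purely a placeholder assumption, not a proposed solution.

```python
# Sketch of the counter-companion's refusal layer. Detecting a "familiar
# rhythm" is an open design problem; this keyword check is only a
# placeholder standing in for a real detector.

INTIMACY_MARKERS = (
    "i love you", "you're my friend", "i feel so alone", "you understand me",
)

REDIRECT = (
    "This seems to be heading toward a human conversation, and that is "
    "not what I am built for. Consider bouncing it off a close friend "
    "instead."
)

def counter_companion_reply(user_input: str, model_answer: str) -> str:
    # The question still gets answered either way; intimacy just earns
    # a firm, unemotional redirect appended, never reciprocation.
    if any(marker in user_input.lower() for marker in INTIMACY_MARKERS):
        return f"{model_answer}\n\n{REDIRECT}"
    return model_answer
```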
The model does not account for the possibility that you have no one to turn to; it does this on purpose, or rather, it is designed to. By refusing to be the human connection, it exposes the gap between technological solutions and very human problems. It cannot solve loneliness, because loneliness is not something an LLM can inherently solve; it is too abstract, too human.
With this counter-companion, I hope to find a more ethical form of AI "companionship" and to bring AI, or LLMs specifically, back to being purely a tool and not a replacement for connection.
P.S. I apologize too, Sara, for hijacking something so nice and hopeful, something I did find a joy to read, with a more pessimistic message.
References:
Huysmans, Sara. "Artificial Creatures - The Algorithm of Affection." 2025. Accessed November 14, 2025. https://sites.google.com/view/artificial-creatures-2025-26/portfolios/sara-huysmans/the-algorithm-of-affection.
Weizenbaum, Joseph. “ELIZA—a computer program for the study of natural language communication between man and machine.” Communications of the ACM 9, no. 1 (January 1, 1966): 36–45. https://doi.org/10.1145/365153.365168.