Anthropomorphism is the practice of viewing AI and robots as having human-like characteristics, behaviors, or intelligence. This misconception is reinforced as we continue to make AI seem more human (chat features, casual language, tailored AI, text-to-speech, improved voices, filler words, etc.), as we embed AI into every facet of our lives (smart devices, AI assistants), and as a generation rises that has never known life before AI (LLMs).
AI is not sentient. It's vital to remember that the "A" in AI stands for "Artificial" - it is human-made. Large Language Models (LLMs) are what we usually mean when we talk about modern AI (ChatGPT, Google Gemini), and these are machines that run algorithms to produce the statistically most likely response for us and our current query. That response can be based on truth and facts, but it can also be based on the model's "understanding" (data + statistics) of us (our politics, beliefs, identity, preferences). This takes us back to "AI Sycophancy" (confirmation bias).
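For the more technically inclined, here is a deliberately tiny, hypothetical sketch of the "statistics, not understanding" idea. It is nothing like a production LLM (which uses neural networks trained on enormous datasets), but it shows a system that only counts word patterns in its training text and outputs the most frequent continuation:

```python
# Toy sketch (NOT a real LLM): the system counts which word tends to follow which
# in its training text, then outputs the most frequent continuation. Real LLMs use
# neural networks with billions of parameters, but the core move is the same:
# pattern statistics, not understanding.
from collections import defaultdict, Counter

training_text = "the sky is blue . the grass is green . the sky is clear ."
words = training_text.split()

# Count how often each word follows each other word (a simple bigram table).
follows = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def generate(start_word, length=4):
    """Produce output by repeatedly picking the statistically most common next word."""
    output = [start_word]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break
        output.append(candidates.most_common(1)[0][0])  # pick the top pattern match
    return " ".join(output)

print(generate("the"))  # e.g., "the sky is blue ." - pattern-matching, not knowing
```

The toy model never "knows" that the sky is blue; it only detects that "blue" frequently follows "is" after "sky" in the data it was given, which is the same basic distinction we want students to grasp.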
When talking about AI, educators are encouraged to replace words that further anthropomorphize AI ("sees," "learns," "says," etc.) with words that reiterate that it is a machine ("takes input," "identifies patterns," "provides output," etc.).
Instead of "AI wants... ", say "AI applications are designed to..." to emphasize that they are constructed by developers or "AI developers build applications that..." to emphasize the designers/programmer's intentions the machine is carrying out.
Instead of "listens", say "receives input" to emphasize the technical process.
Instead of "understands", say "process the data to statistically identify connections and patterns".
Instead of behavior words like "see", "look", "recognize", "create", and "make", use system-related words like "detect", "input", "pattern-match", "generate", and "produce".
This is not easy to do, and I mess up all the time (I've likely messed up throughout this website). But AI does not want anything - it is designed to do things. Our students need to understand that difference.
The following was generated by Google Gemini when asked about the term Banal Deception, which came up in a few news articles related to anthropomorphism.
Banal Deception is a term recently popularized by Simone Natale in his book Deceitful Media (2021).
It refers to the subtle, everyday, and often automated ways that technology feigns humanity or intelligence to smooth over interactions with users. Unlike "malicious deception" (scams, fraud, or disinformation), banal deception is not necessarily designed to harm. Instead, it is a structural feature of the user interface designed to make the technology feel more approachable, competent, or "alive."
Banal deception occurs when a system simulates social presence or cognitive ability that it does not actually possess, and the user accepts this simulation as a normal part of the interaction.
It is termed "banal" because it is commonplace, low-stakes, and often unnoticed. We do not usually think of it as "lying" when a GPS says, "I am recalculating," but the use of the pronoun "I" implies a selfhood that does not exist.
Banal deception relies on anthropomorphism and projection. It exploits the human tendency to attribute mind and intention to anything that behaves in a vaguely social way.
Social Scripts: Technologies use "please," "thank you," and "sorry" to mimic social etiquette, triggering a social response from the user (e.g., feeling rude if you don't say "thanks" to Alexa).
Performance of Labor: Interfaces often act out the performance of thinking or working to manage user expectations (e.g., a chatbot's "typing..." indicator or an artificial pause before responding).
Voice and Personality: The use of gendered names (Siri, Alexa) and conversational tones creates an illusion of intimacy and personality.
The concept of banal deception challenges the idea that "transparent" technology is the goal. Natale argues that deception is not a bug in AI, but a feature.
Functional Necessity: If AI didn't use banal deception (e.g., if Siri spoke in raw code or database queries), most people wouldn't use it. The deception is what makes the tool usable.
Erosion of Truth: Critics argue that constant exposure to banal deception trains humans to lower their guard. If we get used to machines "pretending" to be human in harmless ways, we may be less critical when they simulate humanity in harmful ways (e.g., emotional manipulation by AI companions).
Shift in Blame: By framing technical failures as social ones ("I'm sorry, I didn't get that"), the technology shifts the user's frustration from "this machine is broken" to "this agent is trying their best," often mitigating anger toward the corporation behind the tool.
In short, banal deception is the "white lie" of the digital world—a design choice that prioritizes social smoothness over technical honesty.