Here's something you've probably never consciously noticed: your phone is lying to you. Or rather, it's telling two different truths at the same time.
Pull up your calculator, and you'll see 1-2-3 sitting at the bottom. Switch to your dialer, and suddenly 1-2-3 has jumped to the top [1][2]. If you've used a card reader lately, you may even have seen the digits scrambled into a seemingly random arrangement. The calculator and the dialer are opposite layouts, and yet there's no confusion whatsoever. Your thumb knows exactly where to go, whether you're calling your mother or splitting a restaurant bill.
So why does this contradiction feel so effortless? How do our brains toggle between opposing interfaces without breaking stride? What does this say about the disconnect between what we think users need and what they actually respond to? And here's the really interesting question: when does design consistency actually hurt usability?
The answer isn't really about the technology. It's about the quiet behavioral patterns running in the background of our minds. Cognitive scientists call these "mental models." Basically, they're our brain's shortcuts for how we expect things to work [3][4]. The phone keypad layout (1-2-3 on top) wasn't some arbitrary choice. Engineers in the 1950s deliberately designed it to slow down touch-tone users who were too fast on calculator-style layouts, which would confuse the primitive tone-recognition systems of the era [2]. A technical workaround became a cognitive habit, and now it just lives there in our muscle memory.
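To make the contrast concrete, here's a minimal sketch of the two standard grids as plain arrays. The layouts themselves are real; the code is purely illustrative:

```typescript
// Phone keypad (the ITU E.161 arrangement): digits read top-down, like text.
const phonePad: number[][] = [
  [1, 2, 3],
  [4, 5, 6],
  [7, 8, 9],
]; // 0 sits alone on a fourth row

// Calculator-style numeric keypad: the same digits, flipped vertically.
const calculatorPad: number[][] = [
  [7, 8, 9],
  [4, 5, 6],
  [1, 2, 3],
]; // 0 spans the bottom row

// "1" lives in opposite corners of the two grids, yet each position
// feels natural in its own context. That's the whole paradox.
```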
This reveals something crucial about behavioral design: users aren't hunting for logic—they're hunting for what feels familiar [5]. Don Norman discusses the "Gulf of Execution," which refers to the mental gap between knowing what you want to do and figuring out how to actually accomplish it [5][6]. Good designers don't bridge that gap through rigid consistency. They bridge it by matching the mental models users already carry around, even when those models directly contradict each other.
When MOOCs Forget How People Actually Learn
Think about how this plays out in online education. MOOCs promised to democratize learning, but they're struggling with completion rates that hover between 5% and 15%. Students drop out citing time pressure, ineffectiveness compared to classroom learning, technical headaches, and, tellingly, monotonous interfaces [7][8]. These aren't minor UX hiccups. They're behavioral mismatches at scale.
The research is clear: people abandon online courses when the platform forces them into unfamiliar navigation patterns that clash with how traditional education has trained them to learn [7]. You can build the most logical course interface in the world, but if it violates someone's mental model of what learning should feel like (the rhythm of moving through material, visible progress markers, easy access to help), cognitive friction piles up until quitting feels easier than continuing.
The fix isn't to make every MOOC look identical. It's to make them cognitively cooperative [9]. That means pinpointing the exact moments where learners hit the biggest "Gulfs of Execution" (confusing enrollment flows, vague progress indicators, overwhelming walls of content) and redesigning those specific pain points to match what users already know [10]. Sometimes that means borrowing from social media: progress bars, smart notifications, content broken into scrollable chunks. It might make purists uncomfortable, but if it reduces cognitive load, it works [11].
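As one illustration of what "cognitively cooperative" could look like in code, here's a hypothetical sketch; the names `chunkLesson` and `courseProgress` are invented for this example, not taken from any MOOC platform. It splits long-form lesson text into feed-sized chunks and exposes the plain fraction a progress bar needs:

```typescript
// Hypothetical shape for one screen-sized piece of a lesson.
interface CourseChunk {
  index: number;
  body: string;
  completed: boolean;
}

// Split long-form lesson text into short, scrollable chunks so that
// progress stays visible, borrowing the feed-style mental model
// learners already carry from social media.
function chunkLesson(text: string, maxChars = 600): CourseChunk[] {
  const paragraphs = text.split(/\n{2,}/);
  const chunks: CourseChunk[] = [];
  let buffer = "";
  for (const p of paragraphs) {
    if (buffer && buffer.length + p.length > maxChars) {
      chunks.push({ index: chunks.length, body: buffer.trim(), completed: false });
      buffer = "";
    }
    buffer += p + "\n\n";
  }
  if (buffer.trim()) {
    chunks.push({ index: chunks.length, body: buffer.trim(), completed: false });
  }
  return chunks;
}

// A plain fraction is enough to drive the familiar progress-bar pattern.
function courseProgress(chunks: CourseChunk[]): number {
  const done = chunks.filter((c) => c.completed).length;
  return chunks.length > 0 ? done / chunks.length : 0;
}
```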
The AI Framing Trap
Want to see how powerful framing can be? Look at AI tools. Recent studies found something fascinating: when people interact with AI labeled as a "collaborator," they get significantly more frustrated when the output misses the mark, compared to when the exact same system is called a "tool" [12][13]. Same technology. Different labels. Completely different emotional response.
That's framing at work. When we anthropomorphize AI (calling it a partner, a colleague, an assistant), we unconsciously activate our human-interaction mental models. We expect shared context, proactive thinking, and nuanced judgment [12][13]. When the AI fails to deliver on those expectations, we don't just feel disappointed; we feel let down by something we trusted. But frame that same AI as an instrument, something you direct rather than collaborate with, and suddenly people adopt a different mindset. Expectations drop. Patience increases. Iteration feels normal [14].
This matters way beyond AI. Every word we use to describe a feature, every name, every tooltip, every onboarding message activates specific mental models and sets expectations [15][14]. Call something a "personalized learning assistant" in your MOOC and you're promising intelligence and adaptation. Call it "study recommendations" and you're promising suggestions. Same algorithm. Completely different user experience.
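To make that concrete, here's a hypothetical sketch; `recommendNextLessons`, the framing objects, and all the strings are invented for illustration. The output is identical in both cases, and only the label around it changes:

```typescript
// One backend capability. The ranking logic is irrelevant to framing,
// so a hard-coded placeholder stands in for the real algorithm.
function recommendNextLessons(_userId: string): string[] {
  return ["Lesson 4: Mental Models", "Lesson 7: The Gulf of Execution"];
}

// Two framings for the same output. Only the surface copy differs,
// but each label activates a different mental model.
const assistantFraming = {
  label: "Personalized learning assistant",
  impliedPromise: "intelligence, adaptation, shared context",
};

const recommendationFraming = {
  label: "Study recommendations",
  impliedPromise: "optional suggestions you can take or leave",
};

// Same call, same results; the framing alone sets user expectations.
const results = recommendNextLessons("user-123");
console.log(`${assistantFraming.label}:`, results);
console.log(`${recommendationFraming.label}:`, results);
```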
Designing for Minds, Not Manuals
We spend a lot of time in behavioral design obsessing over internal logic. But that's not what users are looking for. They want familiarity. They want flow. They want things to work without thinking [10][11].
The calculator and the dialer teach us something important: great design isn't about consistency; it's about cognitive cooperation [5]. The best interfaces don't force you to learn their language. They speak yours, even when your language is contradictory. Understanding that paradox (that two opposing designs can both feel completely natural in different contexts) is how you build experiences people navigate without effort, without friction, without ever noticing the complexity you've deliberately hidden from view.
[1] Armand, J. T., Redick, T. S., & Poulsen, J. R. (2014). Task-specific performance effects with different numeric keypad layouts. Applied Ergonomics, 45(4), 917-922. https://pubmed.ncbi.nlm.nih.gov/24315462/
[2] HowStuffWorks. (2001). Why are telephone and calculator keypads arranged differently? https://electronics.howstuffworks.com/question641.htm
[3] Nielsen Norman Group. (2024). Mental Models and User Experience Design. https://www.nngroup.com/articles/mental-models/
[4] Make:Iterate. (2023). Mental Models In UX Design: Why, How, What? https://makeiterate.com/mental-models-in-ux-design-why-how-what/
[5] Nielsen Norman Group. (2018). The Two UX Gulfs: Evaluation and Execution. https://www.nngroup.com/articles/two-ux-gulfs-evaluation-execution/
[6] LogRocket. (2025). Reducing the two UX gulfs: Gulf of execution and gulf of evaluation. https://blog.logrocket.com/ux-design/ux-gulf-of-execution-and-evaluation/
[7] German National Library. Intention and barriers to use MOOCs. https://d-nb.info/1214298222/34
[8] Pepperdine University Digital Commons. Massive open online courses (MOOCs) and completion rates. https://digitalcommons.pepperdine.edu/cgi/viewcontent.cgi?article=1441&context=etd
[9] Al-Kindi Center for Research and Development. (2025). UX Principles for Modern UI/UX Design and Their Measurement: A Framework for Digital Product Excellence. Journal of Computing Systems and Technologies. https://al-kindipublisher.com/index.php/jcsts/article/view/10405
[10] Peters, D., Calvo, R. A., & Ryan, R. M. (2018). Designing for Motivation, Engagement and Wellbeing in Digital Experience. Frontiers in Psychology, 9, 797. https://pmc.ncbi.nlm.nih.gov/articles/PMC5985470/
[11] InBeat Agency. (2025). The Role of Behavioral Science in UX Design (2025 Guide). https://inbeat.agency/blog/behavioral-science-in-ux-design
[12] California Management Review. (2025). Framing the Invisible: How AI Narratives Shape Strategic Decision Making. https://cmr.berkeley.edu/2025/06/framing-the-invisible-how-ai-narratives-shape-strategic-decisio
[13] ArXiv. (2024). Anthropomorphism and Framing Bias on Human-AI Collaboration. https://arxiv.org/pdf/2404.00634.pdf
[14] Jacobs, M., Pradier, M. F., McCoy, T. H., Perlis, R. H., Doshi-Velez, F., & Gajos, K. Z. (2024). Expectation management in AI: A framework for understanding user trust. npj Mental Health Research. https://pmc.ncbi.nlm.nih.gov/articles/PMC10990870/
[15] Frontiers in Digital Health. (2025). Exploring user characteristics, motives, and expectations and the influence of user type on the use of and user experience with AI-based conversational agents in healthcare. https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2025.1576135/full