Understanding AI: Offloading or Outsourcing Thinking?
by Paul Kirschner
January 13, 2026
https://paulkirschner173727.substack.com/p/offloading-no-outsourcing
Twice in the past 24 hours, I’ve read a piece in which the author described AI as a tool for cognitive offloading. The problem is that the term is being used incorrectly (sorry, colleagues who shall remain nameless), and because this confusion seems to be spreading, it’s worth clarifying the difference.
In two different documents, written not by laypeople but by leading scientists, AI was described as something we use for cognitive offloading. But that isn’t what’s happening. Pencil and paper are tools for cognitive offloading. When you solve a math problem, you write down intermediate steps and results so you don’t have to hold them in your working memory. You externalise those steps and partial results and bring them back when needed. An external representation of a discussion in collaborative learning works the same way: it tracks turn-taking and argument structure so participants don’t have to hold that information in their heads (I studied this with Jan van Bruggen in 2002[1]). In both cases, you’ve reduced the load on your working memory by offloading some of what’s taking up space to the sheet of paper or the external representation, freeing working memory to keep working on the problem. The paper or diagram just stores information; you’re still doing the thinking. That’s cognitive offloading.
What people are doing with AI is something fundamentally different. They aren’t storing intermediate results so they can continue thinking. They’re handing over the thinking itself. That’s cognitive outsourcing.
In this way, AI isn’t “just a tool”. Tools are hammers, calculators, and word processors. They make work easier, but they don’t perform the thinking for you. You decide what to calculate, what to write, and what to change. The tool supports cognition; it doesn’t replace it. AI is different. It doesn’t just make thinking faster; it takes over parts or even all of thinking itself. That’s what outsourcing means.
Outsourcing isn’t just using something external. It’s the deliberate transfer of a function that would normally be performed internally to an external agent that performs it instead. In business, for example, companies outsource payroll, manufacturing, or customer service. The work still happens, but not inside the organisation.
Cognitive outsourcing works the same way. Instead of remembering phone numbers, we store them in our phones. Instead of navigating with mental maps, we follow GPS. Instead of calculating, we use calculators. In each case, the cognitive process still exists, but it’s no longer performed by the mind. It’s moved to a device. And we already know the consequences. People who rely most heavily on GPS often can’t navigate unaided. People who rely on search engines remember less. People who rely on calculators lose fluency in arithmetic. These aren’t moral failures; they’re predictable effects of shifting cognitive work away from the mind.
AI takes this much further. When you ask an AI to summarise an article, draft an email, generate lesson ideas, explain a concept, or write code, you aren’t offloading memory or intermediate steps. You’re outsourcing composition, inference, organisation, and reasoning. You’re transferring the cognitive work itself and not the load. With offloading, you still think, and the artefact (tool) supports you. With outsourcing, the system thinks, and you consume the result. That distinction matters.
A notebook stores what you write. AI decides what to write. A calculator executes operations you specify. AI decides which operations to perform. AI chooses what’s relevant, how ideas are structured, what arguments are made, and what tone is used, often better than your first attempt. That’s what makes it seductive. But it also means the cognitive work is no longer yours.
Outsourcing can be extremely useful. When AI drafts a routine email or produces a first-pass summary, it saves time and frees up working memory, just as a spreadsheet frees you from hand-calculating columns of numbers. But outsourcing also removes practice, and skills grow through practice. Memory improves when you retrieve. Writing improves when you compose. Reasoning improves when you reason. When someone or something, like AI, does these things for you, you get the output without exercising the underlying cognitive machinery. Over time, that changes what you’re able to do. AI simply scales this effect to much more complex forms of thinking.
The danger isn’t just that people become worse at things or ‘dumber’. It’s that they become weak and dependent. Minds adapt to the workload they’re given. I hate to use the muscle analogy when talking about the brain, but if you’re sick and bedridden, your muscles atrophy from lack of use, adapting to the reduced workload and to the machinery that takes over for them (e.g., a wheelchair, a walker…). If idea generation, structuring, and explanation are routinely outsourced, those capacities weaken or never develop.
There’s therefore a crucial difference between using AI to support thinking and using it to substitute for thinking. Using AI to critique a draft is different from asking it to write the draft. Using AI to check your reasoning is different from letting it do the reasoning. In the first case, the cognitive work remains yours. In the second, it’s been outsourced.
And that difference is exactly the difference between offloading and outsourcing. Offloading supports cognition. Outsourcing replaces it.
And as long as my blog has inadvertently changed into a sermon: AI isn’t going back in the bottle. As Poco sang: And it’s too late to turn back all the clocks in town. It’s too late to take the X from yesterday. Now the die is cast. Nothing happens in the past. The real question isn’t whether we’ll outsource, because we already are. The question is what we choose to outsource and what we decide to keep doing ourselves.
[1] Van Bruggen, J., Kirschner, P. A., & Jochems, W. (2002). External representation of argumentation in CSCL and the management of cognitive load. Learning and Instruction, 12(1), 121-138. https://doi.org/10.1016/S0959-4752(01)00019-6
By Rod J. Naquin - via Substack
Insight: Terms like "AI literacy" have become "capacious" catch-alls that mean everything and nothing, allowing people to push agendas without addressing technical or pedagogical realities.
"A concept defined loosely enough can accommodate any agenda. But rhetorical emptiness does real damage: it shapes policy, justifies spending, and crowds out sharper thinking."
Insight: Most educators interact with "wrappers" (interfaces with specific rules and goals) rather than the raw AI model. Confusing the two leads us to judge the entire technology based on the design flaws of a single app.
"We end up debating 'what AI can do' when we’re actually comparing different products with different designs serving different purposes... judgments about one product get mistakenly generalized to the technology itself."
Insight: We must stop anthropomorphizing AI as a "knower." It is a tool for processing text patterns, and its utility is entirely dependent on the quality of the inputs and the human oversight involved.
"Large language models are sophisticated text processors. They take inputs and generate outputs. They don’t know things the way humans know things. They don’t have intentions or agendas."
Insight: Productive AI use requires a "Human-in-the-Loop" system where the user provides specific resources to ground the model, moving it from a generic "answer machine" to a specialized "analysis partner."
"When we introduce specific resources, we ground the interaction and transform the model from answer machine into analysis partner... The human never leaves the loop."
Insight: You cannot be "AI literate" in a vacuum. To use AI effectively for a specific task (like writing or history), you must already possess deep knowledge of that specific subject to judge the AI's output.
"There is no generic AI literacy floating free of domain expertise. Becoming skilled means becoming a better question-asker, a more thoughtful curator of inputs, and a sharper judge of outputs, all within your domain."
Insight: Fears of plagiarism and "cognitive offloading" are often symptoms of outdated assignments. If an AI can complete a task perfectly without human thought, the task itself likely didn't require much critical thinking to begin with.
"The real threat isn’t AI; it’s poorly designed tasks that never required much thinking in the first place."