Artificial Intelligence (AI) in education covers a wide range of tools - from generative systems that produce text or explanations to adaptive platforms that offer feedback or highlight patterns in learning data. In thinking about learner agency, metacognition and pedagogy, it is helpful to treat AI not simply as a clever utility but as a participant in the learning environment: its suggestions and explanations inevitably shape how students plan their work, question their understanding and decide on next steps. In this context, agency means students' capacity to act with purpose, make decisions and take responsibility for their learning. Metacognition is the ongoing process of monitoring what they know, choosing strategies and adjusting their approach as they go.
AI can support this work when used with care. A well-chosen tool can prompt a student to check their reasoning or reflect on alternatives, and in that sense it becomes a helpful companion for metacognitive thinking. Yet AI also introduces challenges that matter for teaching: if the system proposes the "next move," who is actually directing the learning, and how transparent is the reasoning behind its suggestions? These are not abstract concerns, but everyday practical questions for educators designing tasks in which students make meaning, not just produce output. This page takes the view that AI should be something students think with and also think about - a resource that helps them reflect more deeply, rather than a mechanism that takes thinking away from them.
How to Use This Page
This page helps educators explore how AI can support reflection, autonomy, and metacognitive growth across learning contexts.
If you're new to the topic: Start with the Key Ideas for an overview of how agency and metacognition relate to AI in education.
If you want practical strategies: Read Benefits, Limitations, and Challenges for insights on designing learning environments that sustain learner control and reflective thinking.
If you need examples and resources: Browse Case Studies, Real World Examples, Research Papers, and Webinars and Talks.
Use this page as a basis for discussion with colleagues around assessment design, feedback literacy, and AI-resilient pedagogy.
Key Ideas
AI can act as a mirror that helps learners make their thinking visible. When students receive AI-generated summaries of their writing, alternative explanations of a concept or prompts that highlight gaps or ambiguities, they are encouraged to engage in the core metacognitive processes of monitoring, control and regulation. For example, the Cognitive Mirror framework shows how AI feedback can be used not to replace reasoning but to prompt learners to re-explain their ideas, compare interpretations and adjust their strategies. Open access: https://www.frontiersin.org/articles/10.3389/feduc.2025.1697554/full
Recent research also indicates that structured self-questioning can help students maintain agency when working with generative AI. Tankelevitch et al. (2024) found that learners benefit when AI-supported tasks explicitly require them to judge the quality and relevance of AI output and justify their acceptance or rejection of it. This strengthens their ability to interpret feedback critically rather than rely on it automatically. Open access: https://dl.acm.org/doi/fullHtml/10.1145/3613904.3642902
A short example illustrates this mirror effect: a student uploads a draft paragraph to an AI feedback tool and receives a summary describing its argument structure. The student then compares this summary with their intended message and writes a brief reflection on where the AI's interpretation aligns or diverges from their goal. Through this comparison, the learner clarifies their reasoning and plans targeted revisions. The AI supports reflection, but the thinking remains the student's own.
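For educators or learning technologists who want to prototype this mirror routine digitally, the sketch below shows one way it might be scripted. It is a minimal illustration only, assuming the OpenAI Python SDK; the model name, prompt wording and file name are placeholder choices, not part of the Cognitive Mirror framework or any study cited here.

    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

    draft = open("draft_paragraph.txt", encoding="utf-8").read()  # placeholder file name

    # Ask only for a description of the argument, never a rewrite:
    # the AI holds up a mirror; it does not take over the writing.
    summary = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Describe the argument structure of the student's paragraph "
                        "in two or three sentences. Do not rewrite, correct or improve it."},
            {"role": "user", "content": draft},
        ],
    ).choices[0].message.content

    print("AI reading of your paragraph:\n" + summary)
    print("\nNow answer in your own words:")
    print("1. Where does this reading match what you intended to say?")
    print("2. Where does it diverge, and why might that be?")
    print("3. What one revision will you make as a result?")

The design choice matters more than the tooling: the system prompt forbids rewriting, so the comparison work - and therefore the metacognition - stays with the student.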
The risk is that the mirror becomes a substitute - that learners start accepting AI suggestions without questioning them. When that happens, metacognition is outsourced and agency is diminished. The educator's task is to design prompts and activities that keep the learner in the reflective loop. Asking students to explain why they agree or disagree with AI feedback, or to identify what the AI missed, turns the mirror into a tool for deeper understanding rather than a shortcut to surface-level improvement.
AI can broaden access to timely, formative feedback and help students make more informed decisions about how they learn. Tools that generate explanations, suggest revisions or highlight patterns can support learners in monitoring their progress, controlling their strategies and regulating next steps. When such tools are deliberately positioned as prompts for reflection rather than engines of correctness, they can strengthen agency by giving students greater scope to review, iterate and justify their choices. Mouta, Pinto-Llorente and Torrecilla-Sánchez (2025) note that AI has the potential to "expand or erode agency depending on the pedagogical structures in which it is embedded." Open access: https://link.springer.com/article/10.1007/s44206-025-00203-9
Yet automation can just as easily diminish agency. Many AI systems rely on opaque, data-driven models that nudge behaviour by scripting what counts as a strong argument, a clear explanation or an appropriate style. For instance, a student who receives a single AI-generated "improved version" of their paragraph may feel compelled to accept it rather than evaluate it. Over time, this risks shifting metacognitive labour from the learner to the system. Schiff (2021) argues that education must prioritise "education for AI" - developing critical, evaluative and ethical literacies - rather than treating "AI for education" as a substitute for human judgement. Open access: https://doi.org/10.1007/s40593-021-00270-2
Reclaiming agency in this context requires pedagogical designs grounded in critical and authentic assessment. Tasks should offer choice points where learners decide how to respond to AI suggestions and explain the reasoning behind their decisions. One practical approach is to require a short reflective note alongside AI-assisted work: Which suggestions were accepted? Which were rejected? Why? This helps students maintain ownership of the learning process and keeps metacognitive reasoning in view.
At its strongest, AI becomes a catalyst for agency rather than a constraint. Educators can support this by foregrounding dialogue, justification and critical comparison - ensuring that technology amplifies human judgement rather than overshadowing it.
Metacognitive growth is always relational. Learners develop their capacity to monitor understanding, control strategies and regulate next steps through interactions with teachers, peers and the wider learning environment. Research on relational pedagogy highlights that meaningful learning depends not only on tasks and content but on "pedagogies of mattering" where students feel seen, supported and invited into shared meaning-making (Gravett, Taylor and Fairchild, 2021). Open access: https://doi.org/10.1080/13562517.2021.1989580
AI tools can extend aspects of these relationships by providing alternative explanations, mapping reasoning pathways or helping students prepare for dialogue with peers or tutors. Yet they cannot participate in the affective, interpretive or relational dimensions that underpin deep understanding. As Julia Freeland Fisher (2023) notes in her policy commentary for Harvard's Advanced Leadership Initiative, AI may support more human-centred education "only when schools prioritise relationships, not automation." Open access: https://www.sir.advancedleadership.harvard.edu/articles/ai-can-make-schools-more-human-if-schools-prioritize-relationship-metrics
The educator's task is therefore to mediate the triad of learner–peer–technology in ways that preserve agency and foster reflective co-regulation. For instance, a student might use an AI tool to generate an argument map, bring that draft into a peer discussion to analyse how the AI framed their reasoning, and then refine the piece through a short dialogue with the instructor. In this sequence, AI surfaces patterns, peers challenge interpretations and the teacher helps the learner synthesise judgement. The technology contributes, but the relationships do the pedagogical work.
Designing for relational pedagogy in an AI-enabled context involves creating opportunities for learners to question AI output, negotiate meaning with others and articulate their own perspectives. A useful reflective prompt is: How does this activity invite students to make sense of AI feedback with others rather than simply absorb it? When educators foreground dialogue, critique and shared interpretation, AI becomes one resource among many in a learning ecology grounded in human connection and collaborative understanding.
AI can enhance metacognition and learner agency when used within intentional, reflective learning designs that preserve the educator's role as guide and interlocutor. The benefits below illustrate how AI can strengthen metacognitive awareness, autonomy, and ethical reflection, provided it is integrated critically into pedagogy. Each includes a short "How to apply" suggestion to help teachers adapt the ideas for practice.
Encourages metacognitive reflection and self-regulation through AI-informed feedback.
AI tools can analyse learner input - such as written drafts or problem-solving steps - and generate prompts that help students recognise patterns in their thinking. When treated as a reflective mirror rather than an evaluator, this feedback encourages learners to plan, monitor, and adjust their learning strategies.
How to apply: Invite learners to use AI to identify recurring ideas or reasoning patterns in their work, then discuss how these patterns relate to their own thinking processes.
Provides scaffolds for goal-setting, planning, and monitoring progress.
AI-based learning dashboards can help learners visualise progress, identify strengths, and plan next steps. This externalises the management of learning, supporting metacognitive monitoring and ownership of development.
How to apply: Ask students to use AI-generated summaries of their progress as prompts for reflective journalling or peer discussion about learning goals.
Supports diverse learners through personalised metacognitive prompts.
Adaptive or responsive systems can tailor reflective questions or explanations to individual learners. This flexibility can reduce barriers for students who need different types of support or feedback to engage in metacognitive reasoning.
How to apply: Encourage learners to experiment with AI tools that present feedback in multiple modes - visual, verbal, or conceptual - and to reflect on which approaches best help them understand their own thinking.
Enhances formative assessment and feedback literacy.
When used critically, AI can model key features of high-quality feedback: specificity, constructiveness, and relevance. Learners can evaluate AI-generated comments alongside teacher and peer feedback, deepening their understanding of how effective feedback operates.
How to apply: After submitting a draft, ask students to obtain AI feedback and then critique it using your rubric or peer responses, noting where AI insights align or diverge from human feedback.
Frees time for higher-order metacognitive dialogue.
AI can automate routine aspects of formative assessment, allowing teachers to focus on guiding reflection and strategy. This shifts emphasis from surface-level correction to deeper conversations about meaning, process, and self-awareness.
How to apply: Use AI to provide preliminary formative feedback, then hold short conferences where learners interpret and act on that feedback through discussion and goal-setting.
Encourages ethical and critical awareness of technology in learning.
Reflecting with AI invites learners to consider how algorithmic systems shape their thinking and decision-making. Such activity cultivates digital literacy and ethical agency, reinforcing the importance of human judgment in technological contexts.
How to apply: Facilitate class discussions where students critique AI-generated reasoning or feedback, exploring how it reflects assumptions about what counts as "good" thinking or learning.
Together, these applications position AI not as a replacement for teacher insight but as a catalyst for deeper metacognitive engagement. Used critically, AI can extend reflective dialogue, promote learner autonomy, and strengthen feedback literacy. The goal is not for AI to think for learners but to help them think about their learning.
Flavell, J.H. (1979) 'Metacognition and Cognitive Monitoring: A New Area of Cognitive–Developmental Inquiry', American Psychologist, 34(10), pp. 906–911. https://doi.org/10.1037/0003-066X.34.10.906
Holmes, W., Bialik, M. and Fadel, C. (2019) Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Boston, MA: Center for Curriculum Redesign. https://discovery.ucl.ac.uk/id/eprint/10139722/
Luckin, R. (2018) Machine Learning and Human Intelligence: The Future of Education for the 21st Century. London: UCL IOE Press.
Luckin, R., Holmes, W., Griffiths, M. and Forcier, L.B. (2016) Intelligence Unleashed: An Argument for AI in Education. London: Pearson. Available at: https://oro.open.ac.uk/50104/
Selwyn, N. (2019) Should Robots Replace Teachers? AI and the Future of Education. Cambridge: Polity Press.
Winne, P.H. and Nesbit, J.C. (2009) 'Supporting self-regulated learning with cognitive tools', in Hacker, D.J., Dunlosky, J. and Graesser, A.C. (eds.) Handbook of Metacognition in Education. New York: Routledge, pp. 259–277.
While AI can meaningfully support learner agency and metacognitive reflection, its value depends on thoughtful design and critical use. Without careful framing, AI tools risk narrowing what counts as learning, obscuring meaning-making processes, and undermining the relational and reflective dimensions of pedagogy. Recognising these limitations enables educators to use AI in ways that remain dialogic, transparent, and grounded in professional judgment.
AI feedback reflects data patterns rather than human understanding.
AI models identify statistical regularities rather than interpretive meaning. Their feedback can reproduce existing biases or promote conformity to normative styles of reasoning and expression. This limits learners' ability to engage in authentic reflection or divergent thinking.
How to apply: Encourage students to treat AI feedback as a prompt for interpretation, not a conclusion. Ask them to identify where AI suggestions align - or conflict - with their intended message or reasoning.
AI can promote surface reflection rather than deep metacognition.
Automated comments often focus on surface features such as tone, phrasing, or structure rather than conceptual understanding. Learners may confuse this procedural awareness with genuine metacognitive insight.
How to apply: Use follow-up prompts like "What deeper question does this feedback raise?" or "How could you test this suggestion?" to help students move from compliance to self-evaluation.
Overreliance on automation can erode pedagogical dialogue.
When AI replaces human interaction in formative feedback, opportunities for shared reflection diminish. Metacognition develops through social dialogue - through questioning, negotiation, and collaborative meaning-making - not through automated correction.
How to apply: Retain teacher–student dialogue as the primary space for reflective learning. Use AI to support preliminary feedback, reserving complex interpretive discussion for human conversation.
Opaque algorithms obscure how judgments are formed.
Many AI systems operate as black boxes, offering conclusions without transparent reasoning. This opacity limits learners' ability to reflect on how knowledge or feedback is produced, weakening their metacognitive awareness of process.
How to apply: Choose tools that provide explanatory features or confidence ratings. Engage learners in brief activities that test, critique, or compare AI-generated reasoning to reveal how such systems construct meaning.
Dependence on AI may constrain learner autonomy and creativity.
Students who rely heavily on AI to generate structure, arguments, or feedback risk losing confidence in their own critical judgment. Over-dependence can inhibit curiosity, self-regulation, and originality.
How to apply: Design tasks where students work without AI for part of the process, then reflect on differences in thought patterns or outcomes. Discuss how independent reasoning complements, rather than competes with, technological support.
Bias and cultural inequity can shape AI recommendations.
Most AI systems are trained on data that privilege Western academic norms and linguistic styles. Such systems can inadvertently marginalise alternative perspectives and reduce epistemic diversity.
How to apply: Discuss with learners whose voices or assumptions may be embedded in AI responses. Compare AI-generated writing conventions with alternative rhetorical or disciplinary traditions.
Ethical and privacy concerns affect trust and participation.
AI tools often rely on extensive data capture, raising issues of consent and security. These concerns influence how comfortably learners engage with such systems and whether reflection feels safe or performative.
How to apply: Review each tool's privacy policy before classroom use. Model responsible digital practice by discussing consent, transparency, and institutional data ethics.
Acknowledging these limitations allows educators to frame AI critically within metacognitive and relational pedagogies. The goal is not to reject AI but to engage with it as an object of reflection - a technology to be questioned, interpreted, and humanised within the learning process.
Adams, C., Pente, P., Lemermeyer, G. and Rockwell, G. (2023) 'Ethical principles for artificial intelligence in K-12 education', Computers & Education: Artificial Intelligence, 4, 100131. https://doi.org/10.1016/j.caeai.2023.100131
Flavell, J.H. (1979) 'Metacognition and Cognitive Monitoring: A New Area of Cognitive–Developmental Inquiry', American Psychologist, 34(10), pp. 906–911. https://doi.org/10.1037/0003-066X.34.10.906
Holmes, W., Bialik, M. and Fadel, C. (2019) Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Boston, MA: Center for Curriculum Redesign. https://discovery.ucl.ac.uk/id/eprint/10139722/
Luckin, R. (2018) Machine Learning and Human Intelligence: The Future of Education for the 21st Century. London: UCL IOE Press.
Luckin, R., Holmes, W., Griffiths, M. and Forcier, L.B. (2016) Intelligence Unleashed: An Argument for AI in Education. London: Pearson. Available at: https://oro.open.ac.uk/50104/
Selwyn, N. (2019) Should Robots Replace Teachers? AI and the Future of Education. Cambridge: Polity Press.
Williamson, B. (2021) 'Algorithmic Governance and Education', Critical Studies in Education, 62(1), pp. 1–17.
Winne, P.H. and Nesbit, J.C. (2009) 'Supporting self-regulated learning with cognitive tools', in Hacker, D.J., Dunlosky, J. and Graesser, A.C. (eds.) Handbook of Metacognition in Education. New York: Routledge, pp. 259–277.
Integrating AI into teaching and learning raises complex pedagogical, ethical, and institutional challenges. These challenges are not arguments against AI, but invitations to think critically about how its use reshapes reflection, dialogue, and meaning-making in education. They require educators to consider not only what AI can do, but how it changes the conditions through which learners think, reflect, and act. Each challenge below includes a short "How to apply" suggestion for classroom practice.
Embedding AI in ways that sustain, rather than replace, metacognitive engagement.
AI tools can help visualise thinking processes, but if they automate reflection, learners risk losing the habits of self-questioning that underpin deep learning. The challenge is to design tasks that use AI as a reflective prompt rather than a substitute for metacognition.
How to apply: Treat AI output as an object of analysis, not an answer. After viewing AI feedback, ask learners to articulate how they interpret or contest it, reinforcing their role as active meaning-makers.
Ensuring transparency, interpretability, and ethical accountability.
Many AI systems provide feedback without revealing how it was generated. This opacity prevents both teachers and learners from understanding the logic behind the result, weakening metacognitive reflection on process.
How to apply: Use short class exercises to explore how an AI tool transforms inputs into outputs. Encourage discussion about what this reveals concerning bias, reliability, and the boundaries of algorithmic reasoning.
Balancing efficiency with pedagogical depth.
AI can speed up marking and feedback, yet efficiency alone does not equate to effective learning. The challenge lies in maintaining the slow, dialogic processes through which reflection and understanding deepen.
How to apply: Pair AI-supported feedback with structured reflection activities, such as learner journals or peer discussions, ensuring time saved by automation is reinvested in metacognitive dialogue.
Developing educator confidence and digital pedagogical literacy.
Many teachers feel uncertain about how to integrate AI meaningfully and ethically. This can lead either to avoidance or uncritical use. The challenge is to build collective professional capacity for critical and reflective digital pedagogy.
How to apply: Create collaborative spaces for colleagues to experiment with AI tools, share outcomes, and evaluate how these align with existing metacognitive and pedagogical goals.
Aligning AI with authentic and equitable assessment practices.
AI systems often replicate dominant academic norms, privileging particular linguistic or cultural conventions. This can conflict with authentic assessment approaches that value context, diversity, and learner voice.
How to apply: Test AI tools on diverse student work and analyse how their feedback differs. Discuss these patterns with learners to foreground equity, interpretation, and multiple ways of knowing.
Supporting student criticality and data literacy.
Learners must understand how AI systems work in order to use them reflectively and responsibly. Without explicit guidance, they may accept AI outputs uncritically, undermining metacognitive autonomy.
How to apply: Incorporate brief tasks where students examine AI reasoning, identify assumptions, and justify when they agree or disagree with its suggestions.
Navigating institutional and policy constraints.
Institutional priorities may focus on efficiency or compliance rather than pedagogy, yet these frameworks ultimately shape what is possible for reflective and dialogic learning in the classroom. Educators must navigate such structures to ensure that AI adoption aligns with ethical, human-centred values.
How to apply: Advocate for policies that emphasise transparency and pedagogy over automation. Document reflective uses of AI as evidence of good educational practice.
Meeting these challenges demands more than technical proficiency. It requires critical digital pedagogy - a deliberate effort to situate AI within practices of reflection, care, and autonomy. By addressing these challenges, educators reaffirm teaching as an imaginative and ethical act that keeps learning human.
Adams, C., Pente, P., Lemermeyer, G. and Rockwell, G. (2023) 'Ethical principles for artificial intelligence in K-12 education', Computers & Education: Artificial Intelligence, 4, 100131. https://doi.org/10.1016/j.caeai.2023.100131
Biesta, G. (2013) The Beautiful Risk of Education. Boulder, CO: Paradigm Publishers.
Flavell, J.H. (1979) 'Metacognition and Cognitive Monitoring: A New Area of Cognitive–Developmental Inquiry', American Psychologist, 34(10), pp. 906–911. https://doi.org/10.1037/0003-066X.34.10.906
Holmes, W., Bialik, M. and Fadel, C. (2019) Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Boston, MA: Center for Curriculum Redesign. https://discovery.ucl.ac.uk/id/eprint/10139722/
Luckin, R. (2018) Machine Learning and Human Intelligence: The Future of Education for the 21st Century. London: UCL IOE Press.
Luckin, R., Holmes, W., Griffiths, M. and Forcier, L.B. (2016) Intelligence Unleashed: An Argument for AI in Education. London: Pearson. Available at: https://oro.open.ac.uk/50104/
Selwyn, N. (2019) Should Robots Replace Teachers? AI and the Future of Education. Cambridge: Polity Press.
Williamson, B. (2021) 'Algorithmic Governance and Education', Critical Studies in Education, 62(1), pp. 1–17.
Winne, P.H. and Nesbit, J.C. (2009) 'Supporting self-regulated learning with cognitive tools', in Hacker, D.J., Dunlosky, J. and Graesser, A.C. (eds.) Handbook of Metacognition in Education. New York: Routledge, pp. 259–277.
This section outlines current debates surrounding AI's impact on pedagogy, learner agency, and metacognition. The aim is not to discourage experimentation, but to foreground the tensions and possibilities that accompany AI integration. By engaging critically with these perspectives, educators can sustain reflective, equitable, and human-centred approaches to technology-enhanced learning.
Summary: Research emphasises that while AI can appear to enhance learner autonomy, it also reconfigures the dynamics of agency and authorship. Algorithms shape what counts as reflection or understanding by guiding prompts and evaluating responses. Roe and Perkins (2024) argue that such mediation may "expand or erode learner agency depending on the surrounding pedagogical structures."
Key questions for practice:
Who controls the direction of learning when AI is embedded in reflective activities?
How can educators ensure that students still set goals, select strategies, and regulate their learning rather than simply reacting to AI suggestions?
Reflection prompt: Review one of your own assignments that incorporates AI feedback.
Ask: At what points do students make decisions, and at what points does the system do so for them?
Reference: Roe, J. & Perkins, M. (2024) 'Generative AI and Agency in Education: A Critical Scoping Review and Thematic Analysis.' arXiv preprint. Open access: https://arxiv.org/abs/2411.00631
Summary: AI's promise of personalisation can obscure issues of equity and inclusion. Mouta, Pinto-Llorente, and Torrecilla-Sánchez (2025) show that algorithmic design may privilege dominant academic norms and disadvantage students with differing cultural or linguistic repertoires. They warn that "collective agency risks being displaced by adaptive automation."
Key questions for practice:
Are AI systems amplifying or constraining diverse ways of knowing and expressing understanding?
How can reflective tasks be designed to include critique of AI's assumptions and data biases?
Reflection prompt: In a group discussion, map who benefits and who might be marginalised by an AI-supported task. Consider accessibility, language, and feedback interpretation.
Reference: Mouta, A., Pinto-Llorente, A.M. & Torrecilla-Sánchez, E.M. (2025) '"Where is Agency Moving to?": Exploring the Interplay between AI Technologies in Education and Human Agency.' Digital Society, 4, 49. Open access: https://link.springer.com/article/10.1007/s44206-025-00203-9
Summary: Scholars caution that AI-enabled feedback can inadvertently encourage surface reflection - monitoring output rather than meaning. Schiff (2021) contends that education must prioritise education for AI - developing learners' ethical, critical, and reflective literacies - over AI for education as a productivity aid. In this view, metacognition involves questioning the interpretive limits of AI rather than accepting its responses.
Key questions for practice:
Does AI prompt deep monitoring, control, and regulation of thought, or does it short-circuit them?
How can educators scaffold metacognitive critique of AI feedback to sustain depth of reasoning?
Reflection prompt: Ask learners to explain an AI suggestion in their own words and state whether they agree with it, including reasons grounded in course concepts.
Reference: Schiff, D. (2021) 'Education for AI, not AI for Education: The Role of Education and Ethics in National AI Policy Strategies.' International Journal of Artificial Intelligence in Education, 32(3), 527–563. https://doi.org/10.1007/s40593-021-00270-2
To move from critique to action, educators can incorporate these perspectives into professional learning and course design:
Team reflection: In your next curriculum or module meeting, select one question from the strands above and discuss how your teaching design preserves or redistributes agency.
Student dialogue: Build a short in-class or online activity where learners evaluate an AI tool's output, identifying where it helps or hinders their reflection process.
Ethical transparency: Share openly with students how AI tools are used in your module, why they are included, and what data or reasoning processes underpin them.
Critical engagement with these debates ensures that AI adoption remains pedagogically intentional and ethically grounded. As educators, our task is to help learners not only use AI but also interpret, question, and reimagine its role in their own meaning-making processes.
Ethical reflection is essential for sustaining learner agency and metacognitive depth in an AI-enabled classroom. Ethics is not an abstract add-on but a practical dimension of pedagogy that shapes how students monitor, control, and regulate their learning in relation to technology. This section outlines four ethical domains - fairness, transparency, authorship, and human agency - with verified open-access references and actions for educators.
Fairness
Summary
AI systems reflect the data on which they are trained. When those data encode cultural, linguistic, or socio-economic biases, AI feedback risks reproducing inequity. Holmes et al. (2021) argue that ethical AI design in education must address not only algorithmic bias but also "the social context in which AI systems are enacted." Open access: https://discovery.ucl.ac.uk/id/eprint/10125833/1/Holmes%20et%20al.%20-%202021%20-%20Ethics%20of%20AI%20in%20Education%20Towards%20a%20Community-Wid.pdf
Key questions for practice:
Which learners might be disadvantaged by this AI system's assumptions or language model?
How can bias awareness itself become a metacognitive goal within your course?
Educator action:
Pilot an AI tool with a diverse student group. Ask learners to monitor where its feedback may reflect cultural or disciplinary bias, then discuss how they adjusted their responses.
Transparency
Summary
Transparency ensures that both educators and students understand how AI systems process input and generate feedback. Ethical pedagogy requires that data handling be clear and consent-based. The EDUCAUSE Review essay Generative Artificial Intelligence and Education: A Brief Ethical Reflection on Autonomy (2025) highlights autonomy, authorship, and interpretability as key concerns. Open access: https://er.educause.edu/articles/2025/1/generative-artificial-intelligence-and-education-a-brief-ethical-reflection-on-autonomy
Key questions for practice:
Can learners trace how the AI generated its suggestion or grade?
Do your course materials explain what happens to student data used by AI tools?
Educator action:
Co-create a simple "AI Transparency Log" with students where they record what tool they used, what it produced, how they verified it, and how it shaped their thinking. This strengthens monitoring and evaluation skills.
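For classes that prefer to keep the log digitally, a lightweight structure is enough. The sketch below shows one possible shape for an entry, written in Python; the field names simply mirror the questions above and are illustrative rather than a standard format.

    from dataclasses import dataclass, asdict
    import csv

    @dataclass
    class TransparencyEntry:
        tool: str          # which AI tool was used
        output: str        # what it produced, in the student's own words
        verification: str  # how the student checked or tested it
        influence: str     # how it shaped the student's thinking

    def log_entry(entry: TransparencyEntry, path: str = "ai_transparency_log.csv") -> None:
        """Append one entry to a shared CSV log, writing a header row on first use."""
        with open(path, "a", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=list(asdict(entry)))
            if f.tell() == 0:  # empty file, so write the header first
                writer.writeheader()
            writer.writerow(asdict(entry))

    log_entry(TransparencyEntry(
        tool="ChatGPT",
        output="Suggested a clearer topic sentence for paragraph two",
        verification="Checked the suggestion against the assigned reading",
        influence="Kept my original claim but reordered the evidence",
    ))

Whether the log lives in a spreadsheet, a journal or a small script like this, the metacognitive value comes from the recording and reviewing, not the medium.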
Authorship
Summary
The question of authorship lies at the intersection of ethics and agency. If students rely on AI-generated material, who is responsible for meaning-making? Tan and Maravilla (2024) argue that integrity can be re-imagined as "a practice of attribution, reflection, and accountability rather than prohibition." Their open-access paper explores how generative AI can coexist with authentic assessment. Open access: https://arxiv.org/abs/2407.19088
Key questions for practice:
How do you help students distinguish between AI-assisted insight and independent authorship?
Can assessment design foreground reflective attribution rather than detection or policing?
Educator action:
Ask students to include an "AI-use statement" or reflective note explaining where and how they used AI in their work, what they accepted or rejected, and why. This builds regulation and evaluative judgement.
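Such a statement need not be elaborate. One possible structure - the headings below are suggestions, not a prescribed format - is:

    AI-use statement
    Tool(s) used: ...
    Where in the work AI was involved: ...
    Suggestions I accepted, and why: ...
    Suggestions I rejected or modified, and why: ...
    What remains entirely my own reasoning: ...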
Human agency
Summary
du Boulay (2022) warns that over-reliance on AI can lead to "delegated cognition," where learners defer to algorithms rather than exercising control. Ethical teaching must therefore embed AI within scaffolds that sustain human interpretation, dialogue, and critical decision-making. Open access: https://link.springer.com/chapter/10.1007/978-981-19-0351-9_6-1
Key questions for practice:
Does the AI task preserve student control over goals, strategies, and evaluation?
Are learners using AI to extend thinking or to outsource it?
Educator action:
Redesign one reflective activity to include a step where students critique an AI suggestion and explain why they accept, modify, or reject it. This reinforces metacognitive monitoring and self-regulation.
Conduct an AI ethics audit of one course activity: identify where fairness, transparency, authorship, or agency issues arise.
Add a short student ethics reflection component (e.g., "How does this tool support or limit my autonomy?").
Discuss findings with colleagues to co-develop local guidelines for ethical and transparent AI use.
Encourage students to view ethics as part of metacognitive practice - thinking about how and why they learn with AI, not only what AI produces.
Ethical AI use is a pedagogical commitment, not a technical compliance task. When educators explicitly design for fairness, transparency, authorship, and agency, they model the reflective autonomy that defines authentic learning. Ethical awareness thus becomes a metacognitive act: an ongoing practice of monitoring how technology mediates thought, controlling its role in learning, and regulating its impact on equity and understanding.
Holmes, W., Porayska-Pomsta, K., Holstein, K. et al. (2022) 'Ethics of AI in Education: Towards a Community-Wide Framework.' International Journal of Artificial Intelligence in Education, 32, 504–526. Open access: https://doi.org/10.1007/s40593-021-00239-1
du Boulay, B. (2022) 'Artificial Intelligence in Education and Ethics.' In Zawacki-Richter, O. & Jung, I. (Eds.) Handbook of Open, Distance and Digital Education, pp. 93–108. Open access: https://link.springer.com/chapter/10.1007/978-981-19-0351-9_6-1
Tan, L. & Maravilla, D. (2024) Shaping Integrity: Why Generative Artificial Intelligence Does Not Have to Undermine Education. arXiv preprint. Open access: https://arxiv.org/abs/2407.19088
Strunk, V. and Willis, J. (2025) 'Generative Artificial Intelligence and Education: A Brief Ethical Reflection on Autonomy.' EDUCAUSE Review. Open access: https://er.educause.edu/articles/2025/1/generative-artificial-intelligence-and-education-a-brief-ethical-reflection-on-autonomy
The following empirical case studies illustrate how AI can be embedded to support learner agency and metacognition. Each summary includes verified, freely accessible links to the original study and a brief educator take-away.
Context: Undergraduate STEM activities designed around constructionist principles.
Intervention: Students engaged with ChatGPT and Bing Chat as prompts for questioning, critique, and explanation, using the systems as objects to think with rather than answer generators.
Outcomes: Interaction logs and reflections indicated gains in reflective questioning, self-monitoring of strategies, and conceptual understanding when AI was positioned as a partner for inquiry rather than a solution engine.
Educator take-away: Frame AI as a thinking partner. Ask learners to use AI to generate alternative solution paths, then keep a brief reflection log that records what they accepted, rejected, or revised and why.
Link: EURASIA Journal of Mathematics, Science and Technology Education, Open Access article and PDF: https://doi.org/10.29333/ejmste/13313 and https://www.ejmste.com/download/enhancing-stem-learning-with-chatgpt-and-bing-chat-as-objects-to-think-with-a-case-study-13313.pdf.
Context: Randomised field experiments embedded in undergraduate computer science courses.
Intervention: Students received brief, structured prompts from an LLM to guide post-lesson self-reflection. Conditions compared LLM-guided reflection with business-as-usual and with other scalable reflection prompts.
Outcomes: The LLM-guided reflection condition improved subsequent test performance and self-efficacy measures relative to controls, suggesting that short, structured AI prompts can scaffold metacognitive monitoring at scale.
Educator take-away: After a lesson or task, provide a short AI-generated reflection sequence that asks learners to name one misconception, one strategy that worked, and one change for next time. Follow with a two-minute peer exchange. A minimal scripting sketch of such a sequence appears after the link below.
Link: Open access preprint: https://arxiv.org/abs/2406.07571 (ACM proceedings version also available).
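The take-away above can be prototyped with very little code. The sketch below is a minimal illustration assuming the OpenAI Python SDK, with placeholder prompt wording and model name; it is not the implementation used in the study. It pairs each of three fixed reflection prompts with one short, non-evaluative follow-up question.

    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

    REFLECTION_PROMPTS = [
        "Name one thing you misunderstood before this lesson.",
        "Name one strategy that worked for you today.",
        "Name one thing you will do differently next time.",
    ]

    def follow_up(prompt: str, student_answer: str) -> str:
        """Return one short follow-up question; never answers or corrections."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "You support student self-reflection after a lesson. "
                            "Reply with exactly one short follow-up question. "
                            "Do not give answers, evaluations or corrections."},
                {"role": "user",
                 "content": f"Prompt: {prompt}\nStudent's reflection: {student_answer}"},
            ],
        )
        return response.choices[0].message.content

    for prompt in REFLECTION_PROMPTS:
        answer = input(prompt + "\n> ")
        print(follow_up(prompt, answer), "\n")

Keeping the follow-up to a single question, with no evaluation, preserves the structured brevity that the study associates with gains in monitoring and self-efficacy.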
Context: First-year university writing class with optional use of ChatGPT; qualitative, phenomenological design.
Intervention: Students used ChatGPT across brainstorming, outlining, revising, and editing, and provided artefacts, interviews, and self-reflections documenting decisions and dilemmas.
Outcomes: Students reported benefits for idea generation and organisation, while identifying tensions between efficiency and voice, and between assistance and over-reliance. The study surfaces metacognitive dilemmas that can be made explicit in teaching.
Educator take-away: Use a "compare and account" routine. Require students to submit a short rationale that contrasts their own draft decisions with any AI suggestions they accepted or declined, and why, in relation to the rubric.
Link: Technology, Knowledge and Learning, Open Access article with PDF download: https://link.springer.com/article/10.1007/s10758-024-09744-3.
This real-world implementation from the Chartered Association of Business Schools (Chartered ABS, UK) demonstrates how AI can be used to enhance learner agency, metacognition, and inclusion through deliberate pedagogical design. The example below draws on an open-access institutional case study that is freely available online.
Context: A UK higher-education initiative developed by the Chartered ABS and participating universities to widen access and improve student success. The project targeted first-year, large-cohort undergraduate modules with a focus on students from under-represented backgrounds.
Implementation: Educators introduced AI-enhanced scaffolds including adaptive reflective prompts, personalised analytics dashboards, and peer-AI dialogue exercises. The dashboards provided visual feedback on progress and engagement, prompting students to reflect on questions such as "What does this data show?" and "What will I do next?" Weekly activities combined these tools with guided reflection sessions and peer-led discussions facilitated by teaching staff.
Outcomes: According to the published impact report, student retention improved by approximately 10% and self-reported engagement by 18% following implementation. Students described the tools as "a way to see my progress and plan my learning," highlighting gains in goal-setting and self-monitoring. Educators reported increased reflection quality in learning journals and higher participation in peer-feedback activities. The study emphasised that the success of AI depended on its integration within human-led facilitation and inclusive pedagogical design.
Educator take-away: AI tools can enhance reflection and agency when they are embedded within relational learning environments. Consider:
Pair AI dashboards or analytics tools with a structured "notice-and-act" prompt: "One insight I notice… One change I will make…"
Model short think-aloud sessions showing how to interpret dashboard data critically.
Follow AI-driven reflection with a peer-led review or short discussion to sustain dialogue.
Use institutional analytics ethically - ensure transparency in data use and invite students to critique what the data represents.
Link: Chartered Association of Business Schools (2024). Enhancing Access and Inclusion through AI-Driven Pedagogies: Impact Case Study. Available at: https://charteredabs.org/insights/impact-case-studies/enhancing_access_and_inclusion_through_ai-driven_pedagogies
This example illustrates how AI can be integrated to promote metacognitive growth and learner autonomy when situated within an inclusive, human-centred framework. It reinforces that AI alone does not transform learning; rather, its value lies in how educators frame, mediate, and contextualise its use to sustain reflection, dialogue, and equity.
Chartered Association of Business Schools (2024) Enhancing Access and Inclusion through AI-Driven Pedagogies: Impact Case Study. Available at: https://charteredabs.org/insights/impact-case-studies/enhancing_access_and_inclusion_through_ai-driven_pedagogies (Accessed: 9 November 2025).
Adams, C., Pente, P., Lemermeyer, G. and Rockwell, G. (2023) 'Ethical principles for artificial intelligence in K-12 education', Computers & Education: Artificial Intelligence, 4, 100131.
Annotation: Reviews global AI ethics guidelines for K-12 and distils a set of education-specific principles, including pedagogical appropriateness, children's rights, teacher wellbeing, AI literacy, and attention to structural inequities.
Implications for practice: Use this to frame school or faculty discussions about responsible AI adoption, particularly when designing policies that protect learner agency, support metacognition, and foreground care and justice in classroom AI use.
Biesta, G. (2013) The Beautiful Risk of Education. Boulder, CO: Paradigm Publishers.
Access: Publisher page via Routledge; institutional/library access recommended.
Annotation: Argues that education always involves risk, interruption, and subjectification. Emphasises relational responsibility, dialogue, and human presence over predictable outcomes or data-driven optimisation.
Implications for practice: Use this as a philosophical anchor when integrating AI, ensuring that automation does not erode the "beautiful risk" of open-ended dialogue, judgment, and responsibility in metacognitive work.
Flavell, J.H. (1979) 'Metacognition and cognitive monitoring: A new area of cognitive–developmental inquiry', American Psychologist, 34(10), pp. 906–911.
Access: Available via APA, ResearchGate, and most institutional databases.
Annotation: Foundational definition of metacognition as knowledge about and regulation of one's own cognitive processes. Introduces core distinctions between metacognitive knowledge, experiences, goals, and actions.
Implications for practice: Draw on Flavell's framework when designing AI-supported prompts that explicitly target planning, monitoring, and evaluation.
Holmes, W., Bialik, M. and Fadel, C. (2019) Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Boston, MA: Center for Curriculum Redesign.
Annotation: Synthesises technical, curricular, and pedagogical dimensions of AI in education, including personalisation, assessment, and formative feedback. Balances potential benefits with systemic risks.
Implications for practice: Use this to situate classroom AI experiments within broader curriculum and system design, ensuring alignment with reflective and agency-enhancing pedagogies.
Luckin, R. (2018) Machine Learning and Human Intelligence: The Future of Education for the 21st Century. London: UCL IOE Press.
Annotation: Proposes a framework for understanding human intelligence and examines how machine learning compares and supports human capabilities. Highlights the enduring value of judgment, ethical reasoning, and contextual sensitivity.
Implications for practice: Use this to support discussions with students and colleagues about what AI cannot do, and to design metacognitive tasks that foreground uniquely human capacities.
Luckin, R., Holmes, W., Griffiths, M. and Forcier, L.B. (2016) Intelligence Unleashed: An Argument for AI in Education. London: Pearson.
Annotation: An accessible manifesto outlining the potential of AI for personalisation, formative feedback, and metacognitive scaffolding. Advocates for augmenting rather than automating teaching.
Implications for practice: Use this as an introductory resource for colleagues and as a framework when designing AI-supported reflective or coaching activities.
Selwyn, N. (2019) Should Robots Replace Teachers? AI and the Future of Education. Cambridge: Polity Press.
Access: Publisher and e-book platforms; institutional/library access recommended.
Annotation: A critical sociological analysis of AI, automation, and the politics of EdTech. Examines power, labour, professional agency, and the risks of technological determinism.
Implications for practice: Use this to frame critical staff or student discussions about the limits of automation and the continuing importance of human-led pedagogy.
Williamson, B. (2015) 'Governing software: networks, databases and algorithmic power in the digital governance of public education', Learning, Media and Technology, 40(1), pp. 83–105.
Access: Open access via publisher and institutional repositories: https://doi.org/10.1080/17439884.2014.924527
Annotation: Analyses how database-driven software, learning analytics and networked platforms are used to govern public education in England. Williamson shows how decision-making is increasingly delegated to socio-algorithmic systems that classify, predict and shape learner and institutional behaviour, raising questions about transparency, accountability and control.
Implications for practice: Use this paper to connect classroom use of AI and analytics with wider issues of digital governance. It is particularly useful when discussing with colleagues how dashboards, recommendation systems or algorithmic feedback may script practice, and why metacognitive and critical data literacies are needed to preserve learner and teacher agency.
Winne, P.H. and Nesbit, J.C. (2009) 'Supporting self-regulated learning with cognitive tools', in Hacker, D.J., Dunlosky, J. and Graesser, A.C. (eds.) Handbook of Metacognition in Education. New York: Routledge, pp. 259–277.
Access: Institutional or library access; widely available through Routledge e-collections.
Annotation: Shows how digital cognitive tools can support self-regulated learning by generating trace data, prompts, and strategy scaffolds, while emphasising alignment with metacognitive theory.
Implications for practice: Use this framework to evaluate whether AI tools genuinely support planning, monitoring, and strategy adjustment, and to design tasks that require students to interpret and act upon AI-generated learning traces.
Artificial Intelligence offers significant potential to enhance learner autonomy, metacognitive awareness, and authentic pedagogical practice. Yet realising these benefits requires more than adopting new tools; it demands an intentional design that places human relationships, ethical awareness, and reflective learning at the centre of AI use.
The U.S. Department of Education's report Artificial Intelligence and the Future of Teaching and Learning (2023) emphasises that "it is imperative to address AI in education now to realise key opportunities, prevent and mitigate emergent risks, and tackle unintended consequences."
Access: https://www.ed.gov/sites/ed/files/documents/ai-report/ai-report.pdf (Accessed: 9 November 2025)
Similarly, the World Economic Forum's paper Shaping the Future of Learning: The Role of AI in Education 4.0 (2024) stresses that AI should be framed "not merely as a driver of efficiency but as a means of enabling deeper forms of learner empowerment and creativity."
Access: https://www3.weforum.org/docs/WEF_Shaping_the_Future_of_Learning_2024.pdf (Accessed: 9 November 2025)
Together, these perspectives highlight three interrelated imperatives for educators and institutions:
Preserve learner agency.
Design tasks that invite students to set goals, monitor progress, and justify choices. Encourage AI use for reflection, exploration, and self-assessment rather than for producing final answers.
Embed AI within relational and metacognitive scaffolds.
Use AI to support - not replace - human dialogue. Pair AI feedback with peer and teacher discussion so that monitoring, control, and regulation become shared reflective practices.
Use AI with ethical vigilance.
Model transparency and critical inquiry. Discuss bias, data use, and interpretability, and involve students in questioning how AI influences understanding and decision-making.
Framed in this way, AI becomes a catalyst for thoughtful inquiry rather than a mechanism of automation. It allows teaching to remain an imaginative and relational act - one that sustains what the Department of Education calls "the humanity of learning" while preparing students to think critically within complex, data-rich environments.
Begin by reviewing one assessment or learning activity. Ask: Where might AI help students reflect rather than simply respond? Then design a short metacognitive prompt - such as "What did the AI overlook?" - to make learning visible and self-aware.
AI should never think for learners; it should enable them to think about their learning - together with peers, educators, and technology in a cycle of dialogue, critique, and creation.