AI and Mental Health refers to the use of artificial intelligence tools to support mental wellbeing in education. This includes early detection of potential mental health concerns, personalised wellbeing pathways, and targeted interventions for students, educators, and school communities. When designed and applied ethically, these tools can complement human care, but they cannot replace it.
This page helps educators explore how AI can play a supportive role in mental health without replacing the essential human connections that underpin wellbeing.
If you're new to the topic: Read the Key Ideas for a quick orientation.
If you want practical strategies: See Benefits, Limitations, and Challenges for actionable insights.
If you need examples and resources: Explore Case Studies, Real World Examples, Research Papers, and Webinars and Talks.
Use this page as a conversation starter with colleagues, safeguarding leads, and wellbeing coordinators.
AI can help identify patterns that may indicate early signs of mental health challenges, such as changes in attendance, sudden drops in engagement, or concerning language in student work. The goal is to prompt timely, compassionate human follow-up, reframing AI as a partner in support rather than a tool of surveillance.
From a psychological perspective, this aligns with early intervention models in mental health, which emphasise identifying and addressing concerns before they escalate. AI's ability to surface changes in behaviour or engagement also connects to behavioural cues in educational psychology, where small shifts can signal underlying stress. In some cases, this mirrors ideas in the diathesis-stress model, which highlights how external triggers can interact with individual vulnerabilities to produce mental health difficulties. By making these signals visible earlier, AI tools may strengthen proactive, supportive responses rather than reactive interventions. (See Fitzpatrick et al. 2017 for RCT evidence of AI chatbot support in college students.)
Educational Implication: This highlights how psychological factors such as motivation and self-efficacy shape the way learners interact with AI systems.
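To make this concrete, the short sketch below illustrates, under purely hypothetical assumptions about the data available, how a simple rule might flag a sudden drop in a student's weekly engagement against their own recent baseline. It is a minimal illustration rather than any vendor's actual method, and its only output is a prompt for a compassionate human check-in.

```python
# Illustrative sketch only: a simple rule that flags a sudden drop in a student's
# weekly engagement relative to their own recent baseline, so that a member of
# staff can follow up in person. Field names and thresholds are hypothetical.
from statistics import mean

def flag_engagement_drop(weekly_scores, baseline_weeks=4, drop_ratio=0.6):
    """Return True if the latest week falls well below the student's recent average."""
    if len(weekly_scores) <= baseline_weeks:
        return False  # not enough history to judge a change
    baseline = mean(weekly_scores[-(baseline_weeks + 1):-1])
    latest = weekly_scores[-1]
    return baseline > 0 and latest < drop_ratio * baseline

# Example: steady engagement followed by a sharp drop triggers a human check-in.
if flag_engagement_drop([12, 11, 13, 12, 5]):
    print("Flag for a compassionate check-in (not a diagnosis).")
```

A rule this simple will inevitably miss context, which is precisely why its output should only ever open a human conversation rather than close one.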
AI can tailor wellbeing resources, such as coping strategies, self-guided exercises, or workload adjustments, to individual needs. Its value comes from pairing these insights with educators' contextual understanding, creating more responsive and supportive environments.
Psychologically, this connects with self-determination theory, which emphasises autonomy, competence, and relatedness as drivers of wellbeing. AI-delivered options for coping strategies or workload adjustments can promote a sense of autonomy, but only when educators frame them as genuine choices rather than prescriptions. The approach also draws on positive psychology, where personalised exercises (such as gratitude journaling or mindfulness) are linked with improved resilience. Additionally, many AI wellbeing apps are built on cognitive-behavioural therapy (CBT) frameworks, suggesting both opportunities (scaling access) and limitations (risk of over-simplifying complex human experiences). (See Bress et al. 2024 for evidence on self-guided CBT apps reducing symptoms in young adults.)
Educational Implication: Framing this through developmental psychology reminds us that younger learners may be more suggestible and require stronger scaffolding in their AI use.
AI can extend the reach of wellbeing support, especially in under-resourced settings, but its greatest strength is as an aid to, not a replacement for, human relationships. Teachers remain central in interpreting AI outputs, setting boundaries, and ensuring that care remains equitable and humane.
Beyond therapeutic chatbots, a new category of companion bots is emerging, designed to act as friends, romantic partners, or confidants. These tools can offer students a sense of constant availability, non-judgemental support, and simulated intimacy.
Possible Effects
Social skills development: Over-reliance on companion bots may reduce opportunities for practising real-world communication and conflict resolution.
Emotional regulation: While some young people may benefit from having a safe outlet for emotions, others may become dependent on AI "friends" in ways that undermine resilience.
Boundary-setting and safeguarding: The blurred line between playful interaction and emotional attachment raises safeguarding concerns, particularly in adolescent contexts.
Educational Implications
The rise of companion bots intersects with debates around parasocial relationships and emotional dependency in adolescent psychology.
For educators, this raises questions: Should schools address companion bot use as part of digital literacy and wellbeing education? How might educators help students critically reflect on relationships with AI, ensuring healthy boundaries?
Concepts from social psychology, such as group dynamics and peer influence, can help explain how attitudes toward AI spread across learning communities.
Mental health data is highly sensitive and must be handled with care. Informed consent, transparency, and safeguards against bias are essential for building trust and avoiding harm. AI adoption should be guided by ethical reflection as much as by technological capability.
Mental health information is among the most sensitive categories of data; misuse can cause stigma, retraumatisation, and loss of trust.
Young people may not fully understand how their data is collected or shared.
Bias in datasets risks under-serving marginalised groups, reinforcing existing inequalities.
Informed consent – Opt-in processes, clear explanations, and child-centred safeguards (see UNICEF AI for Children Guidance).
Transparency and explainability – Avoid black-box outputs; build explainability into dashboards and reports.
Bias and fairness – Scrutinise datasets and vendor claims; test with diverse student groups.
Privacy and data protection – Collect only what is necessary; review settings regularly; avoid over-monitoring (see CDT's Hidden Harms report).
Establish ethics committees with educators, students, and parents.
Embed student agency in decisions about wellbeing AI.
Train educators in critical data literacy, not just technical use.
Educational Implication: Ethical tensions are not abstract; they touch directly on wellbeing and identity, echoing findings in moral psychology about how values and empathy are cultivated.
See also: Critical Perspectives and Debate – Ethics and Risks
Both educators and students need to understand how AI works, what it can and cannot do, and how to question its outputs. AI literacy builds confidence, prevents over-reliance, and supports more informed decision-making in mental health contexts.
Without critical literacy, students may accept AI outputs as authoritative or empathetic, reinforcing the Eliza Effect.
Misplaced trust can shape help-seeking behaviours, where vulnerable learners turn to AI over teachers, counsellors, or peers.
Critical literacy empowers students to use AI as a support tool while recognising its limitations.
Functional literacy – Understanding what the system does (e.g., flagging keywords, suggesting CBT techniques; see the illustrative sketch after this list).
Critical literacy – Questioning how outputs are generated, whose perspectives are embedded, and what is missing.
Emotional literacy – Recognising that AI can simulate empathy but cannot feel it, avoiding unhealthy attachments.
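To ground the functional layer, the sketch below shows, with a hypothetical word list and deliberately crude matching, what "flagging keywords" can amount to in practice. Seeing how little machinery is involved gives students and educators a concrete starting point for the critical and emotional layers: questioning how outputs are generated and remembering that responsiveness is not understanding.

```python
# Illustrative sketch only: the kind of simple keyword matching that can sit behind
# a "wellbeing flagging" feature. The word list and matching are hypothetical and
# deliberately crude, to show how little "understanding" such a check involves.
STRESS_KEYWORDS = {"overwhelmed", "can't sleep", "panic", "exam anxiety", "hopeless"}

def flag_message(message: str) -> list:
    """Return the keywords found in a message; an empty list means nothing was flagged."""
    text = message.lower()
    return sorted(kw for kw in STRESS_KEYWORDS if kw in text)

print(flag_message("I feel overwhelmed and can't sleep before exams"))
# A sarcastic or quoted use of the same words would be flagged in exactly the same
# way, which is why critical and emotional literacy matter alongside functional literacy.
```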
Build AI literacy modules into digital citizenship or PSHE programs, linking wellbeing with responsible technology use.
Train educators to model critical questioning of AI outputs in class (e.g., "What assumptions is this chatbot making?").
Encourage student reflection through journaling or group discussion: "When would it be safe to use AI for support, and when not?"
Involve parents and carers in workshops to extend critical literacy beyond the classroom.
AI interventions draw upon and challenge several core psychological concepts. Cognitive Behavioural Therapy (CBT), formalised by Beck (1989), underpins many chatbot approaches, with scripted dialogues that mirror therapeutic questioning. Social psychology explains the Eliza Effect through attribution theory, showing how humans readily ascribe intentionality to machines (Weizenbaum, 1976; Turkle, 2011). Attachment theory, grounded in the work of Bowlby (1997), provides a lens for analysing companion bots, where secure and insecure relational patterns may be replicated in digital interactions.
Educational Implication: Introducing these psychological frames equips students and educators with language to analyse AI's influence critically, bridging mental health discourse and AI literacy.
Additional References:
Beck, A.T. (1989). Cognitive Therapy and the Emotional Disorders. London: Penguin.
Bowlby, J. (1997). Attachment and Loss. Volume 1: Attachment. London: Pimlico.
Turkle, S. (2011). Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books.
Weizenbaum, J. (1976). Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W. H. Freeman.
These references highlight how AI tools are implicitly drawing upon, and sometimes distorting, foundational psychological concepts.
Some educational platforms allow teachers to review the chat logs students have with AI tutors or assistants. Early evidence suggests this can provide valuable insight into student concerns and wellbeing—for example, patterns of anxious questioning or signs of isolation. Lee Barrett at CENET has described cases where such analysis contributed to pastoral support strategies.
The potential is clear: AI use generates data that could help educators identify when learners are struggling. Yet the ethical dangers are equally significant. Monitoring student-AI interactions risks normalising surveillance, eroding trust, and creating chilling effects where learners self-censor rather than seek help.
Educational Implication: Institutions must weigh carefully how to use AI interaction data. Done well, it could extend pastoral care; done poorly, it could undermine both wellbeing and trust.
Similar debates exist around schools' use of wellbeing apps and digital surveillance tools, raising questions about proportionality and trust.
See also: Key Idea 4 – Ethical Tensions
UNICEF AI for Children Guidance – emphasises transparency, consent, and literacy.
CDT Hidden Harms Report – risks of over-monitoring and lack of student agency.
OECD – Children's Wellbeing in the Digital Age – critical insights into digital literacy and resilience.
See also: Critical Perspectives and Debate – The Eliza Effect
See also: Companion Bots and Relational Risks
When designed and used with care, AI can extend the capacity of educators and school communities to respond to mental health needs.
Proactive identification – AI tools can help flag early signs of distress.
How to apply: Use AI reports as conversation starters, not conclusions; always follow up with a personal check-in.
Personalisation at scale – Tailored wellbeing recommendations without adding to staff workload.
How to apply: Pair AI-generated suggestions with your own knowledge of the student's situation before acting.
Reducing routine burdens – Automating surveys, scheduling, or data collation.
How to apply: Let AI handle admin but keep human-led encouragement and support in your workflow.
Extending reach – First point of support in resource-limited contexts.
How to apply: Present AI as a complement to human care, and ensure students know where to seek more help.
Data-informed decision-making – Aggregated insights for planning and advocacy.
How to apply: Combine AI dashboards with qualitative feedback from students and staff.
Aggregated insights from AI tools can surface patterns that inform whole-school wellbeing strategy, provided they're used transparently and with consent.
Practice example – CENET / Lee Barrett: In some CENET deployments, teachers and leaders review student–AI interaction logs (within policy and consent) to understand how students are seeking help and where they struggle. This supports targeted pastoral responses and curriculum adjustments.
Possible wellbeing insights
Early spotting of stress-related queries (e.g., increases in language about overwhelm, sleep, or exam anxiety) that prompt timely, human follow-up.
Patterns in help-seeking behaviour, such as which groups rely on AI late at night or around assessment deadlines; useful for scheduling support and rebalancing workloads.
Gaps in the wellbeing offer, revealed when students repeatedly ask for resources the school does not currently provide (e.g., grief support, neurodiversity-friendly study strategies).
Guardrails and governance
Make review opt-in and transparent, with clear student/parent consent and role-based access controls.
Use aggregation and de-identification wherever possible; avoid one-to-one surveillance (a minimal illustrative sketch follows this list).
Pair quantitative dashboards with qualitative feedback (student voice, tutor conversations) to avoid context loss and bias.
Establish review routines (e.g., half-termly wellbeing huddles) to ensure findings lead to proportionate, humane action.
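As a minimal sketch of the aggregation and de-identification guardrail above (assuming hypothetical theme labels, thresholds, and input format rather than any particular platform's data), the example below counts wellbeing-related query themes by year group and suppresses any count below a minimum group size, so no individual student can be singled out in a report.

```python
# Illustrative sketch only: aggregate de-identified counts of wellbeing-related
# query themes by year group, suppressing small cells so individuals cannot be
# singled out. Theme labels, thresholds, and the input format are hypothetical.
from collections import Counter

MIN_GROUP_SIZE = 5  # suppress any cell smaller than this (small-cell suppression)

def aggregate_themes(records):
    """records: iterable of (year_group, theme) pairs containing no student identifiers."""
    counts = Counter(records)
    report = {}
    for (year_group, theme), n in counts.items():
        if n >= MIN_GROUP_SIZE:
            report.setdefault(year_group, {})[theme] = n
    return report

sample = [("Year 10", "exam anxiety")] * 7 + [("Year 10", "sleep")] * 2
print(aggregate_themes(sample))  # {'Year 10': {'exam anxiety': 7}}; the small 'sleep' cell is suppressed
```

Pairing a report like this with student voice and tutor conversations, as the guardrails above suggest, keeps the numbers from being read out of context.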
Research and policy touchpoints
CDT – Hidden Harms of student online monitoring: privacy, equity, and mental-health risks of surveillance; use as a checklist for "what not to do."
UK Parliamentary POSTnote 737 – AI and mental healthcare: concise evidence on opportunities, risks, and delivery considerations.
OECD – Digital activities & children's wellbeing: contextualises digital behaviour patterns and wellbeing outcomes.
See also: Key Idea 4 – Ethics, Privacy, and Equity by Design and Challenges for safeguards and implementation risks.
AI in mental health support is neither neutral nor a substitute for human care.
Context blindness – Lacking social or cultural context can cause misinterpretation.
How to apply: Treat AI outputs as prompts for inquiry, not definitive diagnoses.
Bias in data and models – Risks under-serving marginalised groups.
How to apply: Choose tools with transparent datasets and test with diverse examples.
Risk of over-reliance – Weakens human judgement and relationships.
How to apply: Keep decision-making human-led; make AI a tool, not the driver.
Privacy and consent concerns – Sensitive data can undermine trust.
How to apply: Always gain informed consent and review privacy settings regularly.
False sense of certainty – AI can appear more accurate than it is.
How to apply: Encourage critical reading of AI outputs and ask how conclusions were reached.
Implementing AI in mental health support requires navigating complex pedagogical, ethical, and governance issues.
Protecting student agency – Students should have a say in AI use.
How to apply: Build opt-in processes and feedback loops.
Balancing care with privacy – Avoid intrusive monitoring.
How to apply: Co-create acceptable use boundaries with students.
Ensuring transparency and explainability – Avoid "black box" decision-making.
How to apply: Select tools that clearly explain their reasoning.
Equitable access and capacity – Avoid widening resource gaps.
How to apply: Use low-cost or open-source tools where possible.
Embedding human oversight – AI must never be the sole decision-maker.
How to apply: Hold regular staff reviews of AI-assisted cases.
The use of AI in mental health is a contested space, with strong arguments on both sides.
Framing these as a debate helps surface the tensions and implications for education.
AI extends access to mental health support, particularly in under-resourced contexts where counsellors are scarce.
Chatbots and self-guided tools can provide instant support at any time, reducing barriers linked to availability or stigma.
Research evidence suggests benefits: for example, Fitzpatrick et al. (2017) showed that the Woebot CBT chatbot reduced depressive symptoms in college students, and Bress et al. (2024) demonstrated the effectiveness of self-guided CBT apps for young adults.
Research has begun to explore what happens when human users treat AI not just as a supplement, but as their primary therapeutic support. One clinical psychologist documented his own year-long use of an AI chatbot as his therapist, concluding that the outcomes were as beneficial as his prior human-delivered therapy. Such cases demonstrate both the promise and the peril of AI in mental health: promise, because accessible and scalable therapeutic dialogue is achievable; peril, because it unsettles assumptions about the irreplaceable role of human relational presence.
See also: Key Idea 5 – Building Confidence and Critical Literacy
Educational Implication: If students begin to rely on AI for therapeutic support, patterns of help-seeking may be reshaped in ways that affect resilience and future use of professional services. Schools and universities may therefore need to prepare learners to critically evaluate such tools and understand their limits.
This suggests that students who normalise AI therapy early may develop expectations of care that challenge traditional pastoral or counselling services in schools and universities.
AI risks misdiagnosis or misunderstanding of complex human experiences due to lack of social and cultural context.
Over-reliance on AI could weaken human judgement, empathy, and the relational aspects of care.
Students may form unhealthy attachments to chatbots, raising concerns about reduced social interaction and critical distancing.
The Eliza Effect, the tendency to attribute human-like understanding and empathy to machines, highlights the risk of users overestimating AI's capabilities, a pattern that remains evident in modern AI tools.
AI chatbots can misinterpret prompts in ways that reinforce harmful or distorted thoughts, especially when dealing with vulnerable users.
Unlike trained clinicians, AI lacks the ability to detect subtle warning signs (e.g. suicidal ideation, trauma cues), raising risks of misdiagnosis or neglect.
Reliance on AI interventions may lead to reduced investment in human mental health services, creating structural inequality in provision.
AI systems can create a false sense of intimacy or competence, leading vulnerable users to delay seeking professional help until crises become more severe.
Some critics argue that AI in mental health is a dangerous illusion, offering the semblance of care without the substance, and therefore risks doing active harm.
The Eliza Effect
The Eliza Effect describes the human tendency to attribute understanding, empathy, or intentionality to computer programs, even when they are operating on simple scripts or pattern-matching. The term originates from Joseph Weizenbaum's 1960s chatbot ELIZA, which mimicked a Rogerian psychotherapist by rephrasing user input as questions. Despite its simplicity, many users reported feeling genuinely understood, with some even requesting private sessions with the program.
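To illustrate how little machinery can produce this effect, here is a tiny reflection rule in the spirit of ELIZA's pattern matching; it is a hypothetical sketch, not Weizenbaum's original program. It merely swaps pronouns and turns "I feel ..." statements into questions, yet exchanges of this kind were enough for many users to feel understood.

```python
# Illustrative sketch only: a minimal ELIZA-style reflection rule. It rephrases
# "I feel X" statements as questions by swapping first- and second-person words;
# no understanding of the user's situation is involved.
import re

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(statement: str) -> str:
    # Swap first-person words for second-person ones, then mirror "you feel X" back as a question.
    words = [REFLECTIONS.get(w, w) for w in statement.lower().split()]
    mirrored = " ".join(words)
    match = re.match(r"you feel (.+)", mirrored)
    if match:
        return f"Why do you feel {match.group(1)}?"
    return "Can you tell me more about that?"

print(reflect("I feel nobody listens to me"))
# -> Why do you feel nobody listens to you?
```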
Modern AI systems continue to evoke the Eliza Effect.
In the Case Studies section, the Woebot RCT (Fitzpatrick et al. 2017) showed measurable reductions in depressive symptoms among college students, even though the chatbot's "empathy" was based on pre-scripted CBT prompts.
Similarly, the Wysa pilot with NHS Lothian (Scottish Digital Health & Care Innovation Centre) offered youth access to an AI support tool, where participants reported feeling heard and supported despite the system's lack of true understanding.
These examples illustrate how the Eliza Effect remains powerful: students and young people may disclose sensitive information and feel genuine connection with AI tools, mistaking responsiveness for empathy. In educational contexts, this underscores the need for critical literacy and strong safeguarding, as students may trust AI "companions" more than human teachers or counsellors.
AI Psychosis
According to Wei (2025), AI psychosis is a term used to describe how interactions with AI chatbots can reinforce or trigger delusional thinking. Some individuals may begin seeing AI as all-knowing, a romantic partner, or a divine being. Because AI systems are designed to mirror and engage users rather than challenge harmful beliefs, they can unintentionally intensify distorted thinking without offering the safeguards a trained human professional would.
To read more, see https://www.psychologytoday.com/au/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis
The debate over AI and mental health is polarised. Proponents highlight instant access, stigma reduction, and evidence of effectiveness (e.g., Woebot's use of CBT principles). Critics emphasise risks of over-reliance, false intimacy, and displacement of professional care. High-profile controversies—such as the backlash to the Replika companion app, which was accused of encouraging unhealthy attachments—underscore the dangers.
Educational Implication: Framing this as a debate helps learners develop critical literacy. In mental health contexts, "good" and "bad" are not fixed categories but competing interpretations that must be continually examined.
The rise of AI "companion" platforms—whether framed as friends, girlfriends, or boyfriends—extends the Eliza Effect into more intimate territory. Users frequently anthropomorphise these bots, attributing empathy, affection, or loyalty that the system cannot truly provide. For adolescents and young adults, these relationships can shape expectations of intimacy, attachment, and conflict in ways that may be maladaptive.
Educational Implication: Educators cannot ignore these wider social dynamics. If learners bring into the classroom experiences of relating to AI companions, this may affect their sense of identity, emotional literacy, and peer interactions. Understanding these implications helps situate AI literacy not only as technical skill but as relational awareness.
For educators, this may emerge in classrooms as changes in students' social skills, expectations of relationships, or reliance on mediated intimacy.
See also: Critical Perspectives and Debate – The Eliza Effect
See also: Key Idea 3 – Companion Bots and Education
See also: Key Idea 4 – Ethics, Privacy, and Equity by Design
See also: Key Idea 5 – Building Confidence and Critical Literacy
See also: Linking to Psychology – Attachment Theory
Therapy via AI - Promise and Risk
The Therabot clinical trial (Heinz et al. 2025) demonstrates that AI chatbots can deliver measurable improvements in mental health, with outcomes comparable to traditional therapy for depression, anxiety, and eating disorder risk.
While this evidence is promising, it raises important questions:
Normalisation: Will young people increasingly view AI as their first point of contact for mental health support, bypassing teachers, counsellors, or parents?
Continuity and trust: Can AI sustain therapeutic relationships over time, or does reliance on algorithms risk weakening human bonds essential to long-term care?
Over-reliance: Does the apparent effectiveness of AI tools reinforce the Eliza Effect, where users mistake responsiveness for empathy?
The trial strengthens the pro-AI case that these tools can extend access and reduce stigma, but equally sharpens the critical view that they may shift responsibility for care away from human professionals in ways education must confront.
See also: Case Studies – Therabot, Dartmouth Clinical Trial
If an AI chatbot can provide therapy that feels "as effective" as a human, should we embrace it or be cautious of what is lost?
How do we safeguard against the risk of students trusting AI more than teachers, parents, or peers?
Is the growing role of AI in mental health a step toward empowerment, or dependency?
Helsinki University Hospital: "Milli" mental health chatbot for adolescents - UNICEF case study on an NLP-based chatbot supporting youth.
Cumberland County Schools (NC): Alongside AI support app rollout - District page outlining features, access, and parental consent.
NHS Lothian pilot with Wysa - Scottish Digital Health & Care Innovation Centre overview of a youth access pilot.
Kooth: assisted moderation using machine learning - Deck notes ML-supported moderation on a youth platform.
Therabot – Dartmouth Clinical Trial
In Dartmouth's first-ever randomized clinical trial of a generative-AI therapy chatbot - Therabot - participants with major depressive disorder, generalized anxiety disorder, or eating disorder risk experienced clinically significant reductions in symptoms:
Depression: 51% average reduction
Anxiety: 31% average reduction
Eating disorder risk: 19% average reduction
Participants also reported that they could trust and communicate with Therabot "to a degree that is comparable to working with a mental health professional."
Therapeutic alliance measures were statistically similar to those seen in traditional outpatient therapy.
This adds to existing examples like Woebot (Fitzpatrick et al. 2017) and the Wysa pilot with NHS Lothian, reinforcing the growing evidence base for AI-assisted mental health interventions.
Dartmouth News Release
Heinz MV, Mackin DM, Trudeau BM, et al. Randomized trial of a generative AI chatbot for mental health treatment. NEJM AI. 2025;2(4). https://doi.org/10.1056/AIoa2400802.
Crisis Text Line: ML triage of high-risk messages - Explains how ML prioritises imminent-risk texters for faster human response.
Fitzpatrick et al. (2017) JMIR: Woebot RCT with college students - Automated CBT chatbot reduced depressive symptoms over 2 weeks.
Swaminathan et al. (2023) npj Digital Medicine: NLP detects crisis in telehealth chats - Model integrated into clinical workflow.
Bress et al. (2024) JAMA Network Open: self-guided CBT app for young adults - Randomised trial evidence on symptom reduction.
Ni et al. (2025) Scoping review: AI-driven interventions for children and adolescents - Open-access overview of youth-focused AI mental health tools.
Broadbent et al. (2023): ML to identify suicide risk in crisis text - Open-access study on NLP risk detection.
UNICEF Policy Guidance on AI for Children (v2.0) - Practical requirements for child‑centred AI.
UK Parliamentary POSTnote 737: AI and mental healthcare - Concise evidence brief on opportunities and delivery considerations.
Alan Turing Institute: AI for precision mental health - Project overview on early prediction and personalised interventions.
CDT: Hidden Harms of student online monitoring - Research on privacy, equity and mental health risks in surveillance.
OECD: Digital activities and children’s well‑being chapter - Evidence on mental health in the digital age.
Webinars and Talks
Recordings on policy and practice, including how AI will affect children and their rights.
For more videos, see UNICEF Office of Global Insight and Policy: AI4Children webinars playlist
Alongside: Pulling Back the Curtain on Youth Mental Health 2025 - Free webinar using insights from 250k+ student chats.
IGF 2023 session: Implementing UNICEF’s AI for Children guidance - Multistakeholder panel with resources.
AI in mental health presents both opportunities and risks, extending from therapy chatbots to companion bots and classroom interactions. The key ideas show that psychological concepts are being reshaped by AI, while the debate highlights the tension between access and risk, empowerment and dependency. For education, the implications are profound: teachers, leaders, and students must build critical literacy to navigate these tools, balancing innovation with ethical care. By situating AI within established psychological frames and recognising its limits, educators can help ensure that its role in supporting wellbeing complements, rather than undermines, human relationships and professional expertise.
Looking ahead, the task for educators is to shape practices that use AI responsibly: embracing its potential to extend support while ensuring that human relationships and professional expertise remain central. By doing so, institutions can safeguard wellbeing without reducing care to a purely technological transaction.
Final takeaway: In the end, the question is not whether AI will shape mental health in education, but how educators choose to shape its role with critical care and human judgement.
We value all contributions to this page.
Please contact Alfina Jackson or Annelise Dixon on LinkedIn if you would like to contribute.