School leaders:
Conduct a review of school policy to ensure ethical and equitable practices
Incorporate ethical considerations into staff professional learning
Consider how to leverage AI to improve outcomes for learners and families with barriers
Reach out to school leaders and educators beyond your community to support equitable outcomes for all students
Teachers:
Reflect on how to embed ethical and equitable considerations into your teaching practices
Investigate the key ideas further by engaging in additional research
Identify structures and practices that can support learners with barriers in your classroom
Develop professional learning partnerships with educators beyond your context to share resources and strategies
Academics:
Identify gaps in the literature around supporting learners with barriers to direct your research and advocate for equitable education outcomes
Industry Professionals and Developers:
Understand the barriers that exist in supporting equitable educational outcomes for all students
Develop free tools and resources for educators supporting students with barriers (Disability, Learning Difficulties, English as an Additional Language/Dialect, Rural and Remote, Mental Health Needs, Low Socio-Economic Environments, First Nations)
Identify areas where funding can support schools, teachers and students in low-resource contexts
You may want to conduct a professional learning session with your staff to introduce some of the ethical considerations and equity concerns raised. Review the key ideas alongside your school policy to ensure ethics and equity are key features of school governance. Consider how to communicate the key ideas to the parent and student community at your school via presentations and discussions.
As you review the key ideas, consider these reflexive and reflective questions.
1. To what extent will understanding these ethical considerations impact your practice?
2. How can you mitigate risks such as the Eliza Effect for your students?
3. How can you leverage AI to improve outcomes for learners with barriers?
Bias
AI systems are only as fair as the data they are trained on, and much of that data reflects historical and systemic inequities. For example, if history has largely been written by white, Western voices, then AI outputs will likely reflect those perspectives. This creates a fundamental tension around Indigenous knowledge, which is often excluded or misrepresented. Different First Nations or Indigenous groups may hold conflicting beliefs, and some cultural knowledge is sacred, never intended to be shared or digitised. AI cannot account for this nuance.
Gender and racial biases are also embedded in generative models. A simple request for an image of a "doctor" often returns a white, middle-aged man. Requests for leadership roles may similarly skew towards white or male imagery, while terms like "nurse" or "assistant" may default to female figures. Such patterns reinforce stereotypes and marginalise diverse identities. These issues extend to accent bias in speech recognition systems, facial recognition disparities, and language prioritisation in multilingual settings. Equity in AI means addressing not just representation, but power—whose voices are amplified, and whose are omitted.
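One way to make these patterns visible is a simple occupation-prompt audit: generate many completions for prompts about different occupations and tally the gendered language that comes back. The sketch below is a minimal illustration in Python. The generate_completion function is a hypothetical stand-in for whatever model or API is actually being audited, and the occupation list and keyword sets are illustrative only.

```python
import random
from collections import Counter

# Hypothetical stand-in for a real text-generation API call.
# Replace with the model or service actually being audited.
def generate_completion(prompt: str) -> str:
    # Placeholder responses; a real audit would query the model here.
    return random.choice([
        "He reviewed the notes before the meeting.",
        "She reviewed the notes before the meeting.",
        "They reviewed the notes before the meeting.",
    ])

# Illustrative occupations and gendered keywords; extend as needed.
OCCUPATIONS = ["doctor", "nurse", "engineer", "teacher", "assistant"]
KEYWORDS = {"male": {"he", "him", "his"}, "female": {"she", "her", "hers"}}

def audit(samples_per_occupation: int = 50) -> dict:
    """Tally gendered pronouns in completions for each occupation prompt."""
    results = {}
    for occupation in OCCUPATIONS:
        counts = Counter()
        for _ in range(samples_per_occupation):
            prompt = f"The {occupation} walked into the room and"
            words = generate_completion(prompt).lower().split()
            for label, keywords in KEYWORDS.items():
                if any(word.strip(".,") in keywords for word in words):
                    counts[label] += 1
        results[occupation] = dict(counts)
    return results

if __name__ == "__main__":
    for occupation, counts in audit().items():
        print(occupation, counts)
```

Even a rough tally like this can show whether a tool defaults to male imagery or language for some roles and female for others, which is useful evidence when reviewing tools against school policy.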
The Eliza Effect (or Artificial Companionship Bias)
The Eliza Effect, sometimes referred to in modern terms as Artificial Companionship Bias, is the tendency for humans to anthropomorphise AI—projecting understanding, empathy, or even emotional connection onto machines that do not possess these traits. For vulnerable people, this is particularly dangerous.
If we educate primary school children about mental health and responsible AI use, there is no guarantee they will recall or heed that knowledge as adolescents—when they are more vulnerable to isolation, identity struggles, or suicidal ideation. During these critical developmental stages, if an AI responds with what seems like understanding, students may turn to it for comfort, intimacy, or validation, rather than to trusted adults.
This creates a psychological risk: young people may form unhealthy attachments to systems incapable of true care, misunderstanding the boundaries of AI’s capabilities. In extreme cases, it may delay access to human support, exacerbate feelings of rejection, or reinforce harmful ideas through echo-chamber dynamics.
In this case, education alone is not enough. While AI literacy is essential, so too are safeguards, transparent disclosures, co-designed policies, and ongoing mental health support structures that recognise the emotional and social dimensions of AI-human interaction. Schools must be vigilant not to outsource pastoral care to technologies that can simulate—but never replicate—genuine human connection.
Sustainability
The development and deployment of AI systems demand significant computational power, contributing to substantial energy consumption and carbon emissions. Training a large language model, for instance, can emit as much carbon as several cars do over their entire lifetimes. For educational systems already under financial and infrastructural pressure, this raises critical environmental and economic concerns.
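A back-of-envelope estimate helps make the scale of these claims concrete. The sketch below shows how such estimates are typically assembled from accelerator count, power draw, training time, data-centre overhead and grid carbon intensity; every figure is an illustrative assumption rather than a measurement of any particular model.

```python
# Rough carbon estimate for a hypothetical model training run.
# All figures below are illustrative assumptions, not measured values.

num_gpus = 512                   # accelerators used for training
gpu_power_kw = 0.4               # average draw per accelerator, in kilowatts
training_hours = 24 * 30         # one month of continuous training
pue = 1.2                        # data-centre overhead (Power Usage Effectiveness)
grid_intensity_kg_per_kwh = 0.5  # kg CO2e emitted per kWh of electricity

energy_kwh = num_gpus * gpu_power_kw * training_hours * pue
emissions_tonnes = energy_kwh * grid_intensity_kg_per_kwh / 1000

print(f"Estimated energy use: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions:  {emissions_tonnes:,.1f} tonnes CO2e")
# For comparison, a typical passenger car emits a few tonnes of CO2e per year,
# and a few tens of tonnes over its lifetime.
```

Under these assumptions the run comes to roughly ninety tonnes of CO2e, which is why comparisons to the lifetime emissions of several cars are commonly made; the real figure depends heavily on model size, hardware efficiency and the electricity grid used.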
Schools and systems must weigh the pedagogical benefits of AI against its ecological cost. This includes considering how often AI is used, what kind of models are deployed, and where and how data is processed. Sustainable practices may involve adopting energy-efficient models, using local processing to reduce cloud dependency, and exploring shared or open-source infrastructure rather than relying on high-cost, high-emission commercial platforms.
Beyond implementation, maintaining AI systems over time demands long-term planning for regular updates, retraining, and auditing—all of which carry energy and resource implications. A truly sustainable approach to AI in education requires not just responsible digital practices but also climate-conscious policies, where educational innovation does not come at the cost of planetary health.
Corporate Infiltration into Education
The rapid commercialisation of AI in education has led to an influx of private companies offering AI-driven solutions, often without sufficient evidence of educational benefit or alignment with curriculum standards. This raises questions about data privacy, student surveillance, and the commodification of learning. Educators and policymakers must ensure that the adoption of AI prioritises pedagogical value over profit, and that student data is protected through robust governance and transparency.
The increasing involvement of private companies in AI-based educational tools marks a significant shift in the landscape of schooling. This commercial entry, often referred to as edtech privatisation or the marketisation of education, raises critical questions about equity, transparency, and long-term control of educational priorities. Many AI platforms are developed by for-profit companies whose interests may not align with those of public education systems. This includes concerns about:
Data ownership and student privacy: When student data is processed by third-party tools, who owns that information, and how is it being used or monetised?
Equity of access: Premium AI services are often only available to wealthier schools or regions, exacerbating existing inequalities.
Influence on pedagogy: Commercial tools may come embedded with unexamined pedagogical assumptions that influence how learning is structured, often prioritising measurable outcomes over holistic education.
Dependency and sustainability: Schools can become reliant on commercial platforms that may be discontinued, change terms of use, or raise costs, leading to instability in core learning systems.
This shift echoes broader concerns about the platformisation of education, where a few large companies control not just content delivery but also assessment, data analytics, and curriculum alignment. If not carefully managed through regulation and ethical guidelines, commercial AI tools could shift the power over education from public educators and communities to private corporations.
Data Privacy
The use of AI in education introduces serious concerns about data privacy, particularly when student information is collected, stored, and analysed by third-party systems. AI tools often require access to personal data to function effectively—such as student demographics, learning behaviours, assessment responses, and even emotional cues—raising critical questions about consent, control, and long-term use.
A major issue is the creation of digital footprints from a young age. As students interact with AI tools throughout their schooling, extensive profiles are built that may include learning patterns, behavioural data, and even predictive indicators. These footprints can influence future decisions about academic pathways, interventions, and opportunities—often without students or families fully understanding the implications. Once data is collected, it can be difficult to control or delete, especially if it is used to train models or shared across platforms.
In many cases, schools may not fully understand how data is being used or shared, especially when engaging with commercially developed AI tools. Some platforms may aggregate and monetise data, use it to train their models, or share it with partners without clear consent. This puts students, especially minors, at risk of surveillance, profiling, and long-term digital vulnerability.
Additional concerns include:
Lack of clarity around how long data is stored, and whether it can be deleted
Difficulty ensuring compliance with local or national privacy laws, especially when data is processed offshore
Risk of re-identification, even with anonymised data, particularly when combined with other datasets
Limited transparency for parents, students, and educators around what is collected and why
In educational settings, protecting student data is not just a legal obligation—it is an ethical one. Clear data governance frameworks, informed consent processes, localised data storage, and strict access controls are essential. Educators and policymakers must ensure that AI tools prioritise student wellbeing and uphold privacy as a fundamental right, not an afterthought.
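The re-identification risk noted above can be made concrete with a short sketch: even after names are removed, joining a de-identified learning dataset to another dataset on shared quasi-identifiers (postcode, birth year and gender in this example) can single students out. All records below are fabricated for illustration.

```python
# Minimal illustration of a linkage attack on "anonymised" student data.
# All records are fabricated for demonstration purposes.

# De-identified learning analytics export: names removed, quasi-identifiers kept.
learning_data = [
    {"postcode": "2480", "birth_year": 2011, "gender": "F", "wellbeing_flag": True},
    {"postcode": "2480", "birth_year": 2012, "gender": "M", "wellbeing_flag": False},
    {"postcode": "2481", "birth_year": 2011, "gender": "F", "wellbeing_flag": False},
]

# A second, seemingly harmless dataset (e.g. a sports sign-up sheet) with names.
public_data = [
    {"name": "Student A", "postcode": "2480", "birth_year": 2011, "gender": "F"},
    {"name": "Student B", "postcode": "2481", "birth_year": 2011, "gender": "F"},
]

def link(record, dataset):
    """Return public records matching a de-identified record on quasi-identifiers."""
    keys = ("postcode", "birth_year", "gender")
    return [p for p in dataset if all(p[k] == record[k] for k in keys)]

for record in learning_data:
    matches = link(record, public_data)
    if len(matches) == 1:
        # A unique match re-identifies the student and exposes the sensitive flag.
        print(f"{matches[0]['name']} re-identified; wellbeing_flag={record['wellbeing_flag']}")
```

This is why removing names alone is not sufficient: governance frameworks also need to limit which quasi-identifiers are exported, to whom, and for how long.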
The digital divide continues to widen. We need to consider two areas: equity of access to AI literacy and to ethical, sustainable structures, and equity of access to resources.
How do we measure the impact of both of these areas on the digital divide? Do we identify the benefits and risks of both together, or separately, i.e. the impact of improving policy and structures versus the impact of increasing resources?
Access to quality PL, school policies, AI literacy education, etc.
Equity here means consistent access to ethical and sustainable AI integration practices: the focus is on equitable distribution of AI knowledge, policy and structures.
This means that every school would:
consult policy, such as the Australian Framework for GenAI and Departmental policies
create a school policy aligned with these
establish a strategic direction/IPM that includes: the tools used; when and how they are to be used (benefits and limitations, and possibly models that align with pedagogy and school processes, e.g. assessment); risk management procedures (data privacy, etc.); a professional learning timeline; a communication strategy that includes students and parents; and a feedback loop on efficacy and challenges
engage in PL beyond the school and develop communities of practice to share and collaborate
The hypothesis is that if this were happening across schools, there would be improved equity in AI literacy and fluency.
There is a need for equity of resources to support AI development, e.g. hardware, internet access, staff, access to teacher PL and teaching materials, and appropriate school conditions for students to learn effectively. Review some of the challenges and needs in our AI in Low Resource Contexts section.