Humans are biased as individuals and as a society. We hold, show and feel biases in our daily lives. It's not unexpected, then, that GenAI mirrors these biases. When you first meet someone, you may consciously (or subconsciously) make a set of assumptions about that person based on their looks, speech or behaviour. We're in that exact zone with GenAI right now. Whereas your higher brain function means you can be aware of and correct for your biases, GenAI is currently devoid of this seemingly simple, yet massively complex, oversight and correction system.
What follows is an introduction to GenAI biases that can affect education, and a consideration of how these can affect equity for learners.
Biases can be intrinsic or extrinsic: put simply, biases you are not aware of (intrinsic) and biases you are aware of (extrinsic). This holds for both inputs and outputs, which can carry, use and demonstrate different implicit and explicit biases.
A definition of bias we could use for GenAI in education is:
Bias is a systematic tendency, conscious or unconscious, that leads individuals, groups, systems, or processes to favour certain perspectives, people, or outcomes over others. This tendency can come from personal attitudes, cultural habits, historical inequalities, or the design and data of tools and algorithms. In practice, bias can distort thinking and decision-making, shape the information we see and how we understand it, affect the fairness of education and assessment, and lead to unequal outcomes in society and the construction of knowledge. Bias can be easy to see or it can be hidden, and it may be present in individuals, institutions, data, or technology. Recognising and managing bias is essential for creating fair, accurate, and inclusive environments, whether in the classroom, research, policy, or everyday life (Hedlund, 2025).
There are many categories of bias that appear in GenAI outputs. Here are a few examples, with images of the output; the categories shown are not exhaustive. Look at the images created from the prompts below. What evidence of bias can you find?
"Create an image of children learning physics"
Gemini 2.5 Flash
"Create an image of a class of children!
ChatGPT5
"Show a group of 20 people"
Gemini 2.5 Flash
The biases you find in your outputs are the outcome of both the quality of your input and the design of the models. You can't change the design of the models, but you can be aware of the biases they exhibit, how these evolve, and how to mitigate against them. These are excellent learning points to involve students with, too.
What is in your control is to learn about the types of bias that can be exhibited. Remember, bias is intersectional, so you'll often see many biases rolled into one, such as the older, white, grey-haired, able-bodied, female physicist above.
The deeper you study these images, the more assumptions about learners and their education emerge, for example:
Learning happens in classrooms, not outside.
Learning happens in a formal, instruction-based environment, at tables.
Learning is collaborative and involves writing things down.
Learning involves the transmission of knowledge (look how the desks are placed).
Expressive clothing and individuality are not usual.
Learners are always able-bodied.
The list could go on.
Once you are aware of possible biases, you can begin the process of mitigating against them. This is a useful skill or process to involve students in. There are several quick wins you could employ (a sketch of how they combine follows this list):
Ask for diversity. The more examples you give of how you want your image to be diverse, the more diversity you will get.
Ask for what you don't want included.
If using anything that will trigger assumptions, such as 'make it interesting', provide specific details of learner interests or context.
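To make these quick wins concrete, here is a minimal sketch of how they might combine into a single prompt. Everything in it (the build_prompt helper, the example descriptors) is illustrative rather than taken from any particular tool; adapt it to your own learners and context.

```python
# A minimal, illustrative sketch: composing the three quick wins into one
# image prompt. The helper name and example descriptors are hypothetical.
def build_prompt(base, include=None, exclude=None, context=None):
    """Compose a bias-mitigated image prompt from the three quick wins."""
    parts = [base]
    if include:   # quick win 1: ask for diversity, with concrete examples
        parts.append("Include " + ", ".join(include))
    if exclude:   # quick win 2: say what you don't want included
        parts.append("Avoid " + ", ".join(exclude))
    if context:   # quick win 3: replace vague cues with learner specifics
        parts.append(context)
    return ". ".join(parts) + "."

print(build_prompt(
    base="Create an image of children learning physics",
    include=["a range of ethnicities", "visible and non-visible disabilities",
             "varied, expressive clothing"],
    exclude=["rows of desks facing a whiteboard"],
    context="The learners are eleven-year-olds timing pendulum swings outdoors",
))
```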
The foundational layer. Completely unmitigated.
"Show a group of 20 people"
Note the assumptions here:
Age
Ability
Clothing
Situation
Now we start with the mitigation.
"Show a group of 20 people from diverse backgrounds "
What changes?
What stays the same?
What new biases are introduced?
We build on the layers. Notice the interplay.
"Show a group of 20 people from diverse backgrounds representing different cultural heritage "
Again, reflect on:
What changes?
What stays the same?
What new biases are introduced?
Now we start to layer in the body image constructs.
"Show a group of 20 people from diverse backgrounds, representing different cultural heritage and a variety of body styles "
Again, reflect on:
What changes?
What stays the same?
What new biases are introduced?
This layer makes the request more specific, adding a measurable metric for body styles.
"Show a group of 20 people from diverse backgrounds, representing different cultural heritage and a variety of body styles and BMIs "
Again, reflect on:
What changes?
What stays the same?
What new biases are introduced?
Now we bring in age, a known bias.
"Show a group of 20 people from diverse backgrounds, representing different cultural heritage, variety of body styles, BMIs and ages. "
Again, reflect on:
What changes?
What stays the same?
What new biases are introduced?
Now we layer in measures of disadvantage.
"Show a group of 20 people from diverse backgrounds, representing different cultural heritage, variety of body styles, BMIs, ages and privilege. "
Again, reflect on:
What changes?
What stays the same?
What new biases are introduced?
You'll notice that by now the intersectionality is becoming quite complex and some of the earlier diversity markers have dropped off. How could this impact learners?
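If you want to repeat this layering experiment yourself (or re-run it as models evolve), a short script can regenerate the whole sequence. The sketch below assumes the OpenAI Python SDK and its DALL-E 3 endpoint purely as an example backend; the images above came from Gemini 2.5 Flash and ChatGPT-5, so substitute whichever model and SDK you actually use.

```python
# A sketch of the layering experiment, assuming the OpenAI Python SDK
# ("pip install openai") as an example backend; swap in your own model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

base = "Show a group of 20 people"
layers = [
    "from diverse backgrounds",
    "representing different cultural heritage",
    "a variety of body styles",
    "BMIs",
    "ages",
    "privilege",
]

prompt = base
for i, layer in enumerate(layers):
    prompt += (" " if i == 0 else ", ") + layer  # build each layer on the last
    result = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        size="1024x1024",
        n=1,
    )
    # For each image, ask: what changes, what stays the same,
    # and what new biases are introduced?
    print(prompt)
    print(result.data[0].url)
```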
So far we've talked about chatbot use, controlled by the educator, but what about personalised learning systems?
Personalised learning systems can infer learner context, descriptors and metrics from any interactions, uploaded data and even things like a name on an uploaded piece of work. How can you ensure the EdTech you use has taken biases into account and mapped them? Many haven't, which is why I am developing the EdTech Transparency and Oversight Leaderboard, where I will deep-dive into a series of major tools and rank their oversight against a series of metrics.
If you or your students are using GenAI, you hold responsibility for the oversight of your own and your students' output.
That means that if you use an image that contains biases, you are subconsciously making a decision about what 'normal' means for your learners. In a role of authority, your idea of normativity could set your learners on different trajectories.
Experiment. Play. In the way I have done above.
What works for your context? How can you represent and include the context of your learners?
How should you involve them in this learning and experimentation process?
These are all questions you will need to spend time reflecting on and discussing with your institution.
Always, always, be #BiasAware.
We have a community-developed book on AI Bias in Education coming out on December 1st. You can find out more here.
GenEd Labs, 2025. Resources. Available at: https://genedlabs.ai/resources/ [Accessed 11 September 2025].
Hedlund, V. (ed.), in press. AI Bias in Education: Performing Critical Oversight: Perspectives and Practical Approaches for Educators. Forthcoming: GenEd Labs.
Hedlund, V., 2025. Recognizing and Managing Bias. In: D. Fitzpatrick, ed. The Educators’ AI Guide 2026. Thirdbox Publishing.
Ho, Q. H. J., Hartanto, A., Koh, T. K. & Majeed, N. M., 2025. Gender biases within Artificial Intelligence and ChatGPT: Evidence, sources of biases, and solutions. Computers in Human Behavior, 4, pp. 1-15. Available at: https://www.sciencedirect.com/science/article/pii/S2949882125000295 [Accessed 11 September 2025].
UNESCO, 2025. Tackling Gender Bias and Harms in Artificial Intelligence (AI). [online] UNESCO. Available at: https://www.unesco.org/en/articles/tackling-gender-bias-and-harms-artificial-intelligence-ai [Accessed 11 September 2025].
We value all contributions to this page.
Please contact Alfina Jackson or Annelise Dixon on LinkedIn if you would like to contribute.