This site follows WCAG 2.1 AA accessibility guidelines. For support or alternative formats, contact the AI and Accessibility project team.
By the end of this lesson, users will be able to:
Identify and describe the key features of AI tools that support accessibility and apply the 7 Core ALTAI Dimensions to assess their transparency, fairness, and ethical use.
Reflect and decide whether an AI tool is appropriate for use in digital learning environments based on its ethical alignment, accessibility, effectiveness, and ability to enhance inclusive course design.
AI offers exciting potential through personalized learning, faster feedback, and creative support, but it also introduces new risks. Without critical evaluation, educators may unknowingly introduce tools that perpetuate bias, spread misinformation, or compromise student data.
Regular evaluation ensures that the tools we adopt align with our values of equity, fairness, and transparency.
Sal Khan explains how artificial intelligence can become a powerful partner in education when used responsibly. He describes how AI can personalize learning for every student, offering real-time feedback and support, while giving teachers more time to focus on creativity and connection. Khan highlights how tools like Khanmigo show that AI can strengthen human teaching rather than replace it, helping to make education more equitable, engaging, and effective.
In this TED Talk, educator Sal Khan explains how AI can strengthen, not replace, human teaching by enhancing personalization, creativity, and real-time feedback in education. Closed captions are available.
AI tools can enhance creativity, efficiency, and student engagement.
Yet, without careful evaluation, they may also reinforce inequities, misinform learners, or fail accessibility standards.
By regularly assessing AI tools, educators ensure that technology strengthens, rather than undermines, equity, inclusion, and ethical learning design.
When choosing or using any AI tool, it’s important to think beyond how well it works. We also need to consider how transparent, fair, and accountable it is. The article “Transparency and accountability in AI systems: safeguarding wellbeing in the age of algorithmic decision-making” by Ben Chester Cheong (2024) explains that when AI systems make decisions we can’t see or understand, they can unintentionally cause harm, spread bias, or reduce trust. Educators and professionals should always ask: Is this tool transparent about how it works? Who is responsible for its output? How is user data protected? Thinking critically about these questions helps ensure AI tools are used safely, ethically, and in ways that support human well-being.
Optional Reading: Transparency and accountability in AI systems: safeguarding wellbeing in the age of algorithmic decision-making - Frontiers in Human Dynamics (2024) (Download Below)
Evaluating an AI tool does not have to feel overwhelming. This step-by-step guide walks you through a simple process that you can apply to any AI tool, whether it is new, widely used, or still being tested in your classroom.
Each step encourages you to slow down, think critically, and consider not only how the tool performs but how it aligns with your teaching values and learners' needs. You will explore what the tool does well, where it might raise concerns, and how it measures up against the 7 ALTAI ethical dimensions, such as fairness, transparency, and accountability.
By the end of this section, you will have a clear framework you can reuse whenever a new AI tool appears, helping you make confident and informed decisions that prioritize equity, safety, and meaningful learning.
Step One: Identify the Purpose
Ask yourself: What do I want this tool to do?
Clarify whether it’s for grading, writing support, feedback, idea generation, communication, accessibility, etc.
If the purpose isn’t clear, the tool may not fit your needs.
Step Two: Explore the Tool
Spend 5–10 minutes exploring its features and testing outputs.
What does it do well?
What limitations or risks do you notice?
Is it intuitive for you and your learners?
Step Three: Apply the 7 ALTAI Dimensions
The 7 Core ALTAI Dimensions provide a trusted and practical way to evaluate whether an AI tool is ethical, transparent, and supportive of positive learning outcomes. They were created to help everyday users, not just technical experts, make informed decisions about which AI technologies should be adopted.
This framework is valuable in education because it looks beyond what a tool can do and asks deeper questions about how it does it and who it benefits. By examining areas such as fairness, privacy, accountability, and human control, educators can better understand whether an AI tool will truly support all learners.
Using the ALTAI Dimensions encourages critical and reflective thinking, helping educators identify strengths, potential risks, and areas that may require ongoing monitoring. Since AI tools are constantly changing, this framework also supports continuous evaluation, empowering teachers to revisit and adjust their decisions as tools evolve.
Overall, the 7 ALTAI Dimensions help educators choose AI tools that align with their professional values and create trustworthy, inclusive, and responsible learning environments.
An example of a completed checklist is below the Test Your Knowledge section on this page.
Human Agency & Oversight: Does it support user control?
Technical Robustness & Safety: Is it reliable and stable?
Privacy & Data Governance: How is user data collected and stored?
Transparency: Can users understand how the tool produces results?
Diversity & Fairness: Does it reflect inclusive values?
Societal Well-Being: Does it align with educational ethics?
Accountability: Is the developer transparent about limitations and updates?
Download the checklist below
Step Four: Test It
Use real examples from your teaching practice.
Try a student prompt or assignment question.
Look for bias, missing context, or errors.
Ask yourself: Would this output support meaningful learning?
Step Five: Reflect and Record
Recording your evaluation is an essential part of building awareness and accountability in how AI tools are used. Taking time to write down your observations, insights, and questions helps you see patterns over time: what works well, what needs improvement, and where your confidence in a tool may shift as you learn more.
Reflection is not a one-time step. Completing your evaluation once does not mean the work is finished. AI tools evolve quickly through updates, new features, and changes to privacy policies. Revisit your notes and checklist regularly to ensure your earlier conclusions still hold true.
By continuously recording and reflecting, you build a transparent record of your decision making and stay prepared to adapt when technology or classroom needs change.
Try this quick quiz to see how well you understand evaluating AI tools in education. Each question includes feedback to help guide your learning. This activity is just for practice and will help you feel more confident applying the ALTAI dimensions when choosing new tools.
References
Cheong, B. C. (2024). Transparency and accountability in AI systems: safeguarding wellbeing in the age of algorithmic decision-making. Frontiers in Human Dynamics, 6. https://doi.org/10.3389/fhumd.2024.1421273
Fedele, A., Punzi, C., & Tramacere, S. (2024). The ALTAI checklist as a tool to assess ethical and legal implications for a trustworthy AI development in education. Computer Law & Security Review, 53, 105986. https://doi.org/10.1016/j.clsr.2024.105986