This site follows WCAG 2.1 AA accessibility guidelines. For support or alternative formats, contact the AI and Accessibility project team.
Upon successful completion of this course, students will be able to:
Explain Core Ethical Principles: Articulate and differentiate the foundational principles necessary for responsible AI use, including Fairness, Reliability and Safety, Privacy and Security, Inclusiveness, Transparency, and Accountability.
Practice Transparent Disclosure: Implement clear practices for disclosing the use of AI tools in academic work, distinguishing between AI-assisted brainstorming and AI-written content.
Ensure Equitable Access: Design or adapt learning activities to ensure equitable access to age-appropriate, vetted AI tools and digital content for all student groups, thereby addressing the Digital Access and Use Divides.
Artificial Intelligence (AI) is shaping decisions in education, healthcare, business, and daily life.
To use AI responsibly, we must understand the ethical principles that ensure technology benefits everyone fairly and safely.
This module introduces six key principles that form the foundation of ethical AI use.
Instructions:
1. Choose one of these short case studies:
- An AI writing assistant that scores essays.
- A facial recognition app used for student attendance.
- A chatbot used for college advising.
2. In small groups or individually, identify where ethical principles are followed or violated.
3.Discuss: Which principle seems most difficult to maintain in this case? Why?
A university uses AI to flag potential plagiarism. The tool incorrectly marks several non-native English speakers’ papers as “high risk.”
Prompt: Which ethical principles are at risk here? How could the institution improve this system?
This video, "What is AI Ethics?" by IBM Technology, explores the critical importance of earning trust in artificial intelligence. The presenter outlines five key pillars for trustworthy AI: Fairness, Explainability, Robustness, Transparency, and Data Privacy. The video emphasizes that AI ethics is a socio-technological challenge, requiring a holistic approach that focuses on the organization's People (Culture), Process (Governance), and Tooling.
This video from UNESCO discusses the Ethics of AI: Challenges and Governance, emphasizing that while AI has the potential to empower people, it can also widen inequalities. The speakers argue that responsible governance cannot rely solely on consumers, but requires pushing the responsibility back onto designers and organizations. They stress the need for sound regulatory frameworks and inclusive global dialogue to ensure AI systems protect human rights and deliver on human goals.
Example: “Transparency is important when I use ChatGPT in class because students should understand how the tool generates information.”
Transparency is a key part of ethical AI use. In academic and professional settings, it is essential to clearly explain how AI tools were used to create or support work.
This module explores ways to disclose AI use responsibly, helping learners maintain academic integrity, trust, and fairness in their writing and projects.
The framework guiding this module is based on Weaver (2024), who developed the Artificial Intelligence Disclosure (AID) Framework, a model that helps writers describe AI use in clear, consistent, and ethical ways.
Simple Disclosure Categories
1. Choose a short writing task (e.g., an email draft, discussion post, or paragraph).
2. Use an AI tool (ChatGPT, Grammarly, or Google Gemini) to support your work.
3. Then write a 1–2 sentence disclosure describing how you used the AI tool.
Example:
“ChatGPT helped me brainstorm examples for my introduction, but all analysis and writing are my own.”
Post your disclosure in the discussion forum or class Padlet to compare approaches.
This video from The Dissertation Coach (Official) explains how to acknowledge the use of generative AI like ChatGPT in academic writing. It clarifies that you should acknowledge it, rather than "cite" it, since AI doesn't cite its own sources. The best practice is to include a statement in your work describing the specific tool used (including the version/date) and providing explicit details about how it was used, such as for organizing, brainstorming, or generating text.
A graduate student uses an AI writing assistant to generate most of their paper but does not mention it. Their instructor notices inconsistencies in tone and asks for clarification.
Discussion Prompt:
Which ethical principles are being violated here?
How could the student have applied the AID Framework to prevent this situation?
Mini Assignment: AI Use Disclosure Statement
Write a short (100–150 word) disclosure statement for a recent assignment or project, following the AID Framework.
Explain:
1. Which AI tools you used
2. How you used them
3. What parts of the work remain fully your own
AI tools are rapidly entering classrooms and universities; however, not every student has the same opportunity to access, understand, or benefit from them.
Ensuring equitable access means giving all learners, regardless of background or resources, fair opportunities to use AI in meaningful and ethical ways.
Instructions:
1. Choose an AI-powered educational tool (e.g., ChatGPT, Grammarly, or Google’s Gemini).
2. Evaluate it using these three equity-focused questions:
- Is the tool free or affordable for all learners?
- Does it support accessibility (e.g., language options, visual/audio aids)?
- Is the data use ethical and transparent?
3. Summarize your findings in a short post or slide: “How equitable is this tool, and what could make it more inclusive?”
This TEDx Talk by Jim Sevier focuses on Bridging the Digital Divide, which he visualizes as a "gap" between those who can access and understand the digital world (the "train") and those who cannot. Sevier argues that the primary barriers are lack of access and lack of understanding (literacy and technical skills).
A university introduces a paid AI note-taking assistant. Students with subscriptions gain higher grades, while others struggle to keep up. Faculty begin to question whether the tool is fair.
Discussion Questions:
- What ethical issues are raised in this example?
- How might the institution ensure equitable access to AI support tools?
- Which digital divide (access or use) is most visible here?
Mini Project: Designing for Equitable AI Access
Create or adapt a short learning activity (classroom, training, or online module) that includes an AI tool.
In 150–200 words, explain:
1. How the activity ensures equitable access for all learners.
2. How you verified the tool’s ethical and age-appropriate use.
3. Which digital divide(s) your design helps to reduce.
References
Al Maharmah, A., Elfeky, A., Yacoub, R., Ibrahim, A., & Nemt-allah, M. (2025). Measuring ethical AI use in higher education: Reliability and validity of the AI academic integrity scale for postgraduate students. International Journal of Innovative Research and Scientific Studies, 8(4), 707–715. https://doi.org/10.53894/ijirss.v8i4.7928
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
IBM Technology. (2021, September 30). What is AI Ethics? [Video]. YouTube. https://www.youtube.com/watch?v=aGwYtUzMQUk
Sah, R., Hagemaster, C., Adhikari, A., Lee, A., & Sun, N. (2025). Generative AI in higher education: Student and faculty perspectives on use, ethics, and impact. Issues in Information Systems, 26(2), 373–386. https://doi.org/10.48009/2_iis_129
Sevier, J. (2017, May 23). Bridging the Digital Divide [Video]. TEDx Greenville. TEDx Talks. YouTube. https://www.youtube.com/watch?v=fzokRz1pgb0
UNESCO. (2023, February 6). Ethics of AI: Challenges and Governance [Video]. YouTube. https://www.youtube.com/watch?v=VqFqWIqOB1g
Weaver, K. D. (2024). The artificial intelligence disclosure (AID) framework: An introduction. College & Research Libraries News, 85(10), 407. https://doi.org/10.5860/crln.85.10.407