This site is still in progress. I’m building it to share my thoughts and studies, but for now, it’s mainly used for my MET coursework.
Sean Jeon
Updated 2025.06.21
Submitted 2025.06.21
In today's digital age, artificial intelligence (AI) is transforming education through personalized and adaptive learning. This is especially true in immersive environments like extended reality (XR), where students engage with content in virtual or augmented spaces. However, these systems are not neutral. They carry serious ethical concerns, including bias, surveillance, and threats to student privacy. In this Critical Learning Task (CLT #5), I explore these concerns using Educational Digital Identity and Agency (EDIDA) as a lens. I analyze one case study and three scholarly sources to propose a design solution that supports fairness, transparency, and learner voice in AI-driven XR classrooms.
The case study by Rane, Choudhary, and Rane (2023) describes how AI is used in Education 4.0 and 5.0 to offer personalized learning. Students interact with intelligent tutors and emotion-tracking systems that adapt content to their performance. While this can improve efficiency, it raises ethical questions. For example, predictive models may limit student choices by deciding what content is “best” for them. This undermines their agency, a key part of EDIDA, because students are not choosing their learning paths—algorithms are.
This approach also risks bias. As Bristol and Shawn (2020) explain, algorithmic systems often reflect the biases of their creators. If the training data lacks diversity, marginalized students may be unfairly labelled or steered toward easier tasks, which can reinforce the Dunning–Kruger effect: students with lower skills may become overconfident because the system avoids challenging them, while high-performing students might doubt themselves if the system reduces task difficulty. In both cases, AI shapes a learner's digital identity in ways that may not accurately reflect their true abilities.
Immersive technologies like VR and AR track a lot of personal information, including body movement, eye gaze, and even emotional responses. Pahi and Schroeder (2023) argue that these systems collect not just personal but also biometric data. Students may not know they are being watched so closely or how that data is used. This kind of hidden surveillance violates the principle of informed consent. It also affects how students behave. If they feel constantly monitored, they may not take academic risks, reducing creativity and confidence.
For culturally diverse students, the risks are even higher. Many AI systems are trained on Western-centric data. This can cause misinterpretations of behaviour or learning preferences. For example, students from cultures that value group harmony may appear passive in systems that reward assertiveness. As a result, the system may rate them unfairly. Their cultural identity is ignored, and their digital identity is misrepresented. This is a form of digital colonialism that must be addressed.
Another serious concern is environmental scanning in mixed reality (MR) settings. AI-powered MR tools often capture surroundings to anchor digital objects or identify room features. However, these scans may unintentionally record people, conversations, or private spaces without consent. This becomes even more troubling in public or semi-public educational environments, where bystanders—such as parents, siblings, or classmates—can be passively recorded or analyzed by AI systems without any formal consent process. This violates both privacy and ethical boundaries, especially when individuals are unaware that their data is being processed.
This chart was generated by napkin.ai based on the writing above.
A major challenge is giving students control over their learning while still benefiting from AI. One solution is to design systems that include "AI dashboards" where students and teachers can see why a certain task was given. Another strategy is to allow privacy "pause" buttons during XR sessions, so students can opt out of tracking temporarily. Also, bias audits can be built into the system to monitor fairness across student groups.
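As a rough illustration of what such a built-in bias audit could look like, the sketch below checks whether any student group is routed to "easier" tasks far more often than the class average. Everything here is hypothetical: the TaskAssignment shape, the group labels, and the 15% tolerance are assumptions for discussion, not part of any existing system.

```typescript
// Hypothetical record of one AI task assignment; all field names are illustrative.
interface TaskAssignment {
  studentGroup: string;                              // e.g. a self-identified demographic label
  difficulty: "remedial" | "standard" | "challenge"; // difficulty tier chosen by the AI
}

// Share of assignments in a list that were routed to the "remedial" tier.
function remedialRate(items: TaskAssignment[]): number {
  if (items.length === 0) return 0;
  return items.filter(a => a.difficulty === "remedial").length / items.length;
}

// Flag groups whose remedial rate exceeds the overall rate by more than `tolerance`.
// The 0.15 tolerance and the simple rate comparison are assumptions, not a standard metric.
function auditAssignments(assignments: TaskAssignment[], tolerance = 0.15): string[] {
  const overall = remedialRate(assignments);
  const byGroup = new Map<string, TaskAssignment[]>();
  for (const a of assignments) {
    const list = byGroup.get(a.studentGroup) ?? [];
    list.push(a);
    byGroup.set(a.studentGroup, list);
  }

  const flagged: string[] = [];
  for (const [group, items] of byGroup) {
    if (remedialRate(items) - overall > tolerance) flagged.push(group);
  }
  return flagged; // groups a teacher or developer should review by hand
}
```

A flagged group would not trigger any automatic change; it would simply surface the pattern for a teacher or developer to review, consistent with the human-in-the-loop idea described below.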
To further address the intersection of predictive personalization and algorithmic bias, SafeXR systems should incorporate:
Explainable AI Dashboards: Students and teachers can view clear reasons for learning path decisions in simple language or icons, supporting metacognitive reflection.
Human-in-the-Loop Systems: Teachers can override AI suggestions, and students can flag content that feels inappropriate or unfair.
Bias-Aware Training Data: Developers must use culturally diverse datasets and regularly conduct fairness checks.
Multiple Learning Pathways: Students should be able to choose from several routes, such as visual, gamified, or narrative-based options, not just the one the AI predicts as “best.”
Avatar Privacy Indicators: Avatars display visual cues like earplugs or shields to show what data is tracked or paused.
Student Reflection Prompts: Learners are prompted to reflect on AI choices (e.g., “Why do you think you were given this task?”) and to challenge them.
Environmental Scan Permissions: XR systems must include prompts and opt-outs for users before scanning physical environments. Alerts should trigger if bystanders enter the scan range.
These strategies support inclusive design and protect educational digital identity and agency in complex, data-rich XR environments.
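To make the first two strategies a little more concrete, here is a minimal sketch of an explainable, overridable learning-path record. The interface and function names (LearningPathDecision, explainForStudent) are hypothetical; they only show how an explanation, alternative pathways, a teacher override, and a student flag could live in one place.

```typescript
// Hypothetical shape of one AI learning-path decision, stored together with a
// plain-language explanation so the dashboard can show why a task was given.
interface LearningPathDecision {
  taskId: string;
  reason: string;            // e.g. "You answered 4 of 5 fraction questions correctly"
  alternatives: string[];    // other pathways the student may choose instead
  teacherOverride?: string;  // set when a teacher replaces the AI suggestion
  studentFlag?: string;      // set when a student marks the task as unfair or unsuitable
}

// Turn the decision into simple language for the student-facing dashboard.
function explainForStudent(d: LearningPathDecision): string {
  if (d.teacherOverride) {
    return `Your teacher chose this task instead of the AI suggestion: ${d.teacherOverride}`;
  }
  return `${d.reason}. You can also choose: ${d.alternatives.join(", ")}.`;
}
```

Keeping the plain-language reason and the alternative routes in the same record is one possible way to serve the dashboard, the human-in-the-loop override, and the multiple learning pathways without building three separate systems.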
My proposed prototype is a VR learning environment with six key features:
Consent Prompts: Students choose what data to share using easy-to-understand icons and voice guidance.
AI Transparency Dashboard: Shows why learning paths were suggested.
Bias Flag System: Detects unfair treatment or content gaps and lets students share their feelings in real time. Students can press a button or speak their reaction: for example, “This feels unfair,” or “I don’t see myself in this content.” The system logs this input for teacher or AI review and helps identify hidden bias or emotional discomfort. It also trains the system to become more inclusive over time (a rough sketch of this flow follows the feature list).
Privacy Pause Mode: Lets students stop tracking at any time.
Avatar Privacy Indicators: The student’s 3D avatar visually reflects which types of data are currently being collected. For example:
Earplugs = sound/mic off
Sunglasses = eye tracking disabled
A shield = full privacy mode
These symbols support ambient awareness and transparency without breaking immersion.
End-of-Session Report: Explains what was tracked and used.
These features support digital identity and agency. They also reduce bias and surveillance, making the XR classroom safer and more inclusive.
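As noted in the Bias Flag System description above, here is a minimal sketch of how that feature and the avatar privacy indicators might be represented in the prototype. The event shape, the cue mapping, and logBiasFlag are illustrative assumptions, not an implemented API.

```typescript
// Hypothetical shape of one student bias flag, captured by button press or voice.
interface BiasFlagEvent {
  studentId: string;
  timestamp: Date;
  channel: "button" | "voice";
  message: string;        // e.g. "This feels unfair" or "I don't see myself in this content"
  contextTaskId: string;  // the task or scene the student was in when they flagged it
}

// Hypothetical mapping from paused data streams to the avatar's visual privacy cues.
const AVATAR_PRIVACY_CUES: Record<string, string> = {
  microphone: "earplugs",     // sound/mic off
  eyeTracking: "sunglasses",  // eye tracking disabled
  all: "shield",              // full privacy (pause) mode
};

// Store a flag for teacher or AI review; persistence is deliberately left abstract here.
const flagLog: BiasFlagEvent[] = [];

function logBiasFlag(event: BiasFlagEvent): void {
  flagLog.push(event);
  // A real prototype would also notify the teacher dashboard and feed anonymized
  // patterns back into fairness reviews of the content library.
}
```

In a working prototype, the logged flags would feed the teacher dashboard and the end-of-session report rather than sit in an in-memory array.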
The demo storyboard for the Bias Flag System was created based on the user persona’s job-to-be-done and prioritized using the impact–effort framework. To see the Miro board, click here (https://miro.com/app/board/uXjVIliFvAc=/?share_link_id=473216080622)
AI in XR has great potential, but we must use it carefully. Predictive systems can manipulate student choices, reinforce biases, and threaten cultural diversity. EDIDA reminds us that students are not just data points—they are learners with identities and rights. This SafeXR Classrooms prototype can be a step toward ethical and inclusive design that protects those rights while promoting innovative learning.
If a student chooses to pause all AI tracking and data collection, how can we ensure they are not at a disadvantage compared to peers who receive full AI personalization and support?
To what extent can explainable AI dashboards truly promote agency in learners with limited digital literacy or language barriers?
How do we ensure that environmental scan permissions in MR environments are enforced fairly and reliably, especially in dynamic public or semi-public educational spaces?
Rane, N., Choudhary, S., & Rane, J. (2023). Education 4.0 and 5.0: Integrating artificial intelligence (AI) for personalized and adaptive learning. Available at SSRN: https://ssrn.com/abstract=4638365
Bristol, D., & Shawn, Z. (2020). Artificial intelligence in education: Should students pay the price for algorithmic bias? AI & Society. https://doi.org/10.1007/s00146-020-01054-3
Government of Canada. (2022, July 12). Artificial intelligence is here series: Talking about bias, fairness, and transparency (DDN2-V23). https://www.csps-efpc.gc.ca/video/artificial-intelligence-here-series/bias-fairness-transparency-eng.aspx
Pahi, S., & Schroeder, C. (2023). Extended privacy for extended reality: XR technology has 99 problems and privacy is several of them. Notre Dame Journal on Emerging Technologies, 4, 1.