The XR Accessibility Workshop offers an engaging and interactive program designed to inspire and empower participants in advancing accessibility in Extended Reality (XR). The agenda will feature:
Sessions on Lived Experiences: Offering firsthand insights into the real-world challenges faced by users with disabilities, helping to expose problem areas and highlight the importance of accessible design.
Poster Presentations: Showcasing innovative ideas, solutions, and research related to XR accessibility in a collaborative, visual format.
Interactive Panels and Discussions: Engaging sessions with key stakeholders from academia, industry, and advocacy to explore challenges, opportunities, and collaborative approaches to making XR more inclusive.
With a focus on dialogue, hands-on demonstrations, and practical collaboration, the workshop is a unique opportunity to connect, learn, and contribute to shaping a more inclusive XR future.
14.00 - 14.45
Introduction & Interactive Session
14.45 - 15.45
Poster Session
15.45 - 16.15
Conference-wide catered coffee break
16.15 - 17.45
Panel: More Than Theoretical - The Real Stakes and Challenges of XR Accessibility Research
Harshadha “Harsha” Balasubramanian
is a PhD candidate at the Centre for Digital Anthropology (UCL) and, as an intern, led Microsoft Research’s groundbreaking “Scene Weaving” project on non-visual VR access.
Atieh Taheri
is a Presidential Postdoctoral Fellow at Carnegie Mellon University’s Human-Computer Interaction Institute. She earned her PhD in Electrical and Computer Engineering from the University of California, Santa Barbara. Her research centers on accessibility and the lived experiences of people with disabilities, creating innovative user-centered solutions through participatory design methods.
Nigel Newbutt
is a B.O. Smith Research Professor and Director of the Equitable Learning Technology Lab at the University of Florida. His research explores technologies for autistic and neurodivergent populations, focusing on virtual reality headsets to support daily life. He emphasizes user input to shape inclusive, practical VR applications.
Melissa Malzkuhn
is a third-generation Deaf innovator and social entrepreneur who works to advance sign language learning, experience, and access as a human right. Melissa is the founder and director of Motion Light Lab at Gallaudet University, where she leads creative R&D and the creation of fluent signing 3D characters.
Jianye Wang
is a Deaf 3D animator with expertise in Maya, Unreal Engine, and Unity. With many years of work experience, he has contributed to 3D sign language animation projects as well as AR/VR interactive development.
17.45 - 18.00
Wrap-up & ideas for future directions
Enhancing XR Accessibility through Anthropometrically Diverse Biomechanical Simulation
Takeshi Miki, Florian Fischer, John J. Dudley, Per Ola Kristensson
Biomechanical user simulations can complement efforts to enhance XR accessibility by enabling rapid evaluation of alternative design choices across a wider range of user body types and capabilities. However, it is unclear to what extent current biomechanical simulation policies generalize to different body shapes. To address this gap, we investigate how biomechanical models for mid-air selection perform under anticipated diversity in user anthropometry. Using a musculoskeletal simulator with reinforcement learning (RL), we analyzed how variations in arm length influence model performance. The results reveal that existing RL policies are applicable only to the narrow range of anthropometric conditions for which they were trained, with significant performance deterioration outside this range. Our study highlights the importance of developing flexible and general biomechanical models capable of representing anthropometrically diverse users.
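For illustration only (this is not the authors' code), a minimal Python sketch of the kind of evaluation loop the abstract describes: a policy trained at one body scale is replayed in a hypothetical gym-style musculoskeletal environment whose arm length can be scaled, and selection success is recorded per scale. The environment factory, the arm_scale parameter, and the policy.act interface are all assumptions.

```python
# Hypothetical sketch: probing how a fixed RL policy generalizes across arm lengths.
# `make_env(arm_scale=...)` and `policy.act(obs)` are assumed interfaces, not real APIs.
import numpy as np

def evaluate_policy(make_env, policy, arm_scales, episodes=20):
    """Return the mean mid-air selection success rate for each arm-length scale."""
    results = {}
    for scale in arm_scales:
        env = make_env(arm_scale=scale)      # policy was trained at scale 1.0
        successes = 0
        for _ in range(episodes):
            obs, done, info = env.reset(), False, {}
            while not done:
                action = policy.act(obs)
                obs, reward, done, info = env.step(action)
            successes += int(info.get("target_hit", False))
        results[scale] = successes / episodes
    return results

# e.g. evaluate_policy(make_midair_selection_env, trained_policy,
#                      arm_scales=np.linspace(0.8, 1.2, 9))
```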
Investigating Visual Attention for Simulated Vision Loss and Central Vision Loss in Virtual Reality
Sreynit Khatt, Bobby Bodenheimer
Little is known about how individuals with low vision attend to 3D scenes in a virtual environment (VE). This study investigates visual attention in a VE under both simulated vision loss (in participants with normal vision) and real central vision loss. We present a qualitative comparison of saliency maps of visual attention generated from eye-tracking data: maps from participants with normal vision under no loss and under simulated low vision, maps from a participant with low vision with and without corrective spectacles, and a map generated by a computational prediction model.
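As a hedged illustration of how fixation-based saliency maps are commonly built (not necessarily the authors' pipeline), the Python sketch below accumulates 2D gaze samples into a heat map and smooths it with a Gaussian kernel; the projection of gaze to image coordinates and the kernel width are assumptions.

```python
# Illustrative sketch: fixation-density saliency map from eye-tracking samples.
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(gaze_points, width, height, sigma=25.0):
    """gaze_points: iterable of (x, y) pixel coordinates of gaze samples."""
    heat = np.zeros((height, width), dtype=float)
    for x, y in gaze_points:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < width and 0 <= yi < height:
            heat[yi, xi] += 1.0                 # accumulate gaze density
    heat = gaussian_filter(heat, sigma=sigma)   # spatial smoothing
    return heat / heat.max() if heat.max() > 0 else heat
```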
AVA: An Audio-based Virtual Aiming System for Accessible VR Shooting Games
Florian Apavou, Tifanie Bouchara, Patrick Bourdot
Video games provide a rich source of entertainment and social interaction, yet they remain largely inaccessible to blind and visually impaired individuals (BVIs). AVA is an Audio-based Virtual Aiming system designed to guide BVIs in shooting video games using sonification methods in Virtual Reality (VR). Our current goal is to design sonification methods that simultaneously optimize the accuracy and speed of shooting tasks in VR games for BVIs. We are currently conducting two successive experiments comparing three sonification methods (Pitch, Tempo & Pitch, and Binary Pitch & Tempo) and two configurations: one exploring the impact of adding a sound at the center of targets, the other examining different sound spatialization approaches to facilitate target localization.
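To make the sonification idea concrete, here is a purely illustrative Python sketch (not the AVA implementation) of how angular aiming error could be mapped to a pitch, and to a pitch plus click tempo; all frequency and tempo ranges are arbitrary assumptions.

```python
# Illustrative mappings from aiming error (degrees off target) to sonification parameters.
def pitch_only(error_deg, max_error=90.0, f_min=220.0, f_max=880.0):
    """Smaller aiming error -> higher pitch (Hz)."""
    closeness = 1.0 - min(error_deg, max_error) / max_error
    return f_min + closeness * (f_max - f_min)

def tempo_and_pitch(error_deg, max_error=90.0, bpm_min=60.0, bpm_max=600.0):
    """Combine the pitch cue with a click rate that rises as aim approaches the target."""
    closeness = 1.0 - min(error_deg, max_error) / max_error
    freq = pitch_only(error_deg, max_error)
    clicks_per_min = bpm_min + closeness * (bpm_max - bpm_min)
    return freq, clicks_per_min
```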
Designing Accessible XR for Neurodegenerative Disease Patients: Insights from Parkinson’s Disease Case Study
Daria Joanna Hemmerling, Paweł Jemioło, Mateusz Danioł, Marek Wodziński, Jakub Kamiński, Magdalena Igras-Cybulska, Magdalena Wójcik-Pędziwiatr
XR systems hold transformative potential for healthcare, offering immersive and patient-centered environments that cater to the specific needs of individuals with neurodegenerative diseases. These systems should enhance accessibility, usability, and engagement through intuitive interfaces, multimodal feedback, and controlled environments, improving the quality of data collection while reducing patient anxiety and fostering cooperation during assessments. This paper discusses the accessibility, user experience, and usability of an XR system developed using Microsoft HoloLens 2 for Parkinson’s Disease patients. It integrates multimodal data collection to evaluate motor functions, speech, cognition, gait, and gaze patterns. Insights from quantitative metrics and qualitative user feedback provide an understanding of the system’s strengths and areas for improvement.
Simulating Central Vision Loss for Multisensory Navigation in Virtual Environments
Maggie K. McCracken, Maisha Tahsin Orthy, Bobby Bodenheimer, Jeanine Stefanucci, Sarah Creem-Regehr
Low vision is a prevalent health issue that restricts individuals' access to virtual reality (VR) technology because of reduced visual input. In this study, we developed a low vision simulation to model central vision loss, a symptom of eye conditions such as macular degeneration. Participants completed a navigation task using both visual and body-based cues under normal and simulated low vision conditions. Results indicate that simulated central vision loss alters sensory cue use during navigation. These preliminary findings highlight the potential of low vision simulations for future research, including simulating different types or amounts of vision loss and aligning simulations with the experiences of people living with low vision.
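One common way to simulate a central scotoma is to occlude or blur a gaze-contingent circular region of each rendered frame. The Python sketch below shows that idea, not the study's actual implementation; the radius and blur strength are arbitrary assumptions.

```python
# Illustrative gaze-contingent central vision loss: blur a circular region around the gaze point.
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_central_loss(frame, gaze_xy, radius_px=120, blur_sigma=15.0):
    """frame: HxWx3 float array; gaze_xy: (x, y) pixel location of the current gaze."""
    h, w = frame.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(xx - gaze_xy[0], yy - gaze_xy[1])
    mask = gaussian_filter((dist < radius_px).astype(float), sigma=blur_sigma)[..., None]
    blurred = gaussian_filter(frame, sigma=(blur_sigma, blur_sigma, 0))
    return frame * (1.0 - mask) + blurred * mask   # blend the blurred center into the frame
```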
Feasibility of Real-Time 3D Object Detection for Accessibility Use Cases
Muhammad Haj Ali, Rafael Damouni, Amjad Nassar, Ilan Shimshoni, Sarit Szpiro
Augmented Reality (AR) holds promise for improving visibility and accessibility of real-world objects. AR systems, however, often rely on 2D overlays or marker-based methods, which face limitations such as poor camera performance in low-light conditions. We present a new approach to improve visual accessibility in AR. We developed a Unity-based application for the HoloLens 2, integrating Azure Object Anchors (AOA) for real-time 3D object detection with depth sensors. Using pre-scanned 3D models, the application detects and then projects virtual overlays on objects to increase their visibility while also logging eye gaze and spatial movement to enable evaluation of user interactions.
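The application itself is Unity-based; purely for illustration, the Python sketch below shows the kind of timestamped gaze and head-pose record such logging might produce for later analysis. The field names and CSV format are assumptions, not the authors' schema.

```python
# Hypothetical log record for eye gaze and spatial movement; field names are assumptions.
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class InteractionSample:
    timestamp: float          # seconds since session start
    head_x: float
    head_y: float
    head_z: float
    gaze_dir_x: float
    gaze_dir_y: float
    gaze_dir_z: float
    gazed_object: str         # name of the detected object hit by the gaze ray, if any

def write_log(samples, path="interaction_log.csv"):
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(InteractionSample)])
        writer.writeheader()
        for s in samples:
            writer.writerow(asdict(s))
```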
Accessible VR Social Stories in Education for Neurodiverse Students
Estella Oncins
Immersive environments have the potential to revolutionize education by enhancing motivation and enriching learning experiences. Recent research suggests that VR-based experiences can be particularly beneficial for neurodiverse students, aiding their psychosocial integration and skill-building. Still, questions about the accessibility of VR solutions in educational contexts remain unresolved. This paper presents the InclusiVRity project, an EU-funded initiative that aims to provide teachers and caregivers with accessible VR tools and materials for use in educational contexts and in learning activities tailored to neurodiverse students.
Accessible VR Museum Experience Tailored to User Needs
Brigida Bonino, Franca Giannini, Marina Monti, Katia Lupinetti
This study investigates and discusses methods for increasing accessibility in virtual reality (VR) applications in order to provide an inclusive immersive experience. Accessible VR enables scenarios that can be adapted in all their aspects to users' needs, overcoming barriers arising from sensory and motor impairments as well as cultural differences. In particular, this contribution applies these methods to the design and development of a virtual museum experience in which both the environment and the interactions are adapted to visitors' physical and personal information. This customization enhances comfort, ease of use, and intuitiveness, ensuring that the experience meets each user's specific needs and preferences.
The Impact of Different Obstacle Outline Enhancements in Augmented Reality on Walking Experience
Lior Maman, Ido Yarkoni, Ilan Vol, Shachar Maidenbaum, Sarit Szpiro
Improving the visibility of obstacles can enhance mobility, which remains a significant challenge for various populations in different scenarios. As a first step toward developing such a system, we examined the experience of walking an obstacle course with typically sighted participants across four conditions: passthrough, partial augmentation (physical obstacle outlines augmented), full augmentation (virtual obstacles fully overlaid on physical obstacles), and virtual-only objects. Walking time was significantly slower in the partial condition. Interestingly, although obstacle visibility differed across conditions and affected walking speed, participants did not notice this difference and rated the various augmentations similarly.
Demonstrating SeeSing: Scoring Spontaneous Encounters for the Visually Impaired with Smart Glasses
Shan Luo, Botao Amber Hu
Can the visually impaired (VI) "hear" others' smiles? This short demonstration paper employs a research-through-design methodology to envision AI-driven melodic auditory augmentation, beyond traditional descriptive approaches, for spontaneous social interactions involving VI individuals. We propose a research probe, SeeSing, a smart glasses application that translates facial expressions, gestures, and environmental cues into emotionally contextual soundscapes, akin to real-time cinematic scoring. Our findings indicate that this melodic augmentation not only facilitates VI users' spontaneous interactions with strangers but also enhances their comprehension of emotional nuances in multi-person dialogues. However, the research also unveils ethical concerns, particularly regarding potential misunderstandings arising from AI biases in interpreting social subtleties.