First International Workshop on
Artificial Intelligence for Accessible XR Experiences and Spatial Systems
(AI-AXESS)
How can artificial intelligence (AI) transform extended reality (XR) into a universally accessible and adaptive medium?
Despite advancements in XR technologies, many systems still fail to accommodate users with diverse needs, such as those with visual, motor, or cognitive impairments. A key challenge lies in enabling XR systems to dynamically understand and respond to both physical and virtual spaces in real time. By integrating AI-driven techniques such as machine learning algorithms, computer vision, semantic mapping, intelligent localization systems, and adaptive interfaces, XR can bridge these gaps, creating immersive experiences that are inclusive, intuitive, and context-aware. This workshop will bring together experts in AI, XR, and inclusive design to explore cutting-edge solutions that empower all users to fully engage with immersive technologies.
In the last decade, AI has emerged as a transformative force across XR applications. Techniques evolving from earlier computer vision algorithms to current neural networks and large language models are expanding the possibilities for accessibility in immersive environments. Key advancements include semantic scene understanding, multimodal interaction processing, and adaptive content generation, complemented by specialized models for spatial mapping, gesture recognition, and gaze tracking. Edge AI solutions now enable on-device processing with reduced latency, while federated learning approaches address privacy concerns through decentralized training. These technologies aim to make XR interactions more intuitive and adaptive for diverse users. For example, spatial understanding through object recognition holds potential for assisting navigation for users with visual impairments, while natural language processing enables flexible interaction methods that reduce reliance on complex physical controls. Similarly, neural networks for adaptive rendering can modify visual elements like contrast or highlighting to improve clarity for users with low vision.
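As a concrete illustration of the adaptive-rendering idea, the minimal Python sketch below boosts contrast and highlights edges in a rendered frame using gain parameters that, in a full system, an AI model might derive from a user's low-vision profile. This is a sketch under stated assumptions, not an implementation from any XR SDK; the function and parameter names (adapt_frame, contrast_gain, edge_boost) are hypothetical and chosen for this example.

```python
# Illustrative sketch only: a hypothetical post-processing pass that
# increases contrast and highlights edges of an XR frame for a user
# with low vision. In practice the gains would come from a learned
# per-user model; here they are plain arguments.
import numpy as np

def adapt_frame(frame: np.ndarray, contrast_gain: float = 1.5,
                edge_boost: float = 0.3) -> np.ndarray:
    """Enhance an RGB frame (H x W x 3, floats in [0, 1]) for low vision."""
    # Stretch contrast around mid-gray.
    enhanced = np.clip((frame - 0.5) * contrast_gain + 0.5, 0.0, 1.0)
    # Crude edge highlighting: emphasize local luminance gradients.
    luma = enhanced.mean(axis=2)
    grad_y, grad_x = np.gradient(luma)
    edges = np.sqrt(grad_x**2 + grad_y**2)
    enhanced = enhanced + edge_boost * edges[..., np.newaxis]
    return np.clip(enhanced, 0.0, 1.0)

# Example usage with a random array standing in for a rendered XR frame.
frame = np.random.rand(480, 640, 3)
adapted = adapt_frame(frame, contrast_gain=1.8, edge_boost=0.4)
```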
Nonetheless, integrating AI into XR is not without its challenges. Current systems often struggle with balancing real-time performance demands against hardware limitations, particularly in applications requiring precise localization or adaptive rendering. For example, latency issues can disrupt seamless navigation for visually impaired users, while insufficient computational resources may limit the effectiveness of real-time scene interpretation for cognitive or sensory accessibility. Furthermore, a lack of diverse training datasets often leads to biases in AI models, reducing their reliability for underrepresented user groups, such as those with atypical speech patterns or complex motor disabilities. Additionally, ethical considerations around user privacy and data security are of crucial importance, as accessibility features may rely on sensitive inputs like gaze or voice. Robust frameworks are needed to ensure responsible data handling. Finally, the rapid evolution of both AI and XR technologies adds another layer of complexity, requiring accessibility features to be adaptable and forward-thinking.
We encourage submissions presenting AI-driven or AI-enabled approaches, including early-stage work, initial analyses, experimental techniques, or position papers summarizing relevant AI-based methods and experiences. Papers should be 2 to 8 pages. Topics include but are not limited to:
AI-driven localization and semantic mapping for accessible navigation
LLM integration enabling natural language interfaces and contextual assistance
Multimodal interaction systems (voice, gesture, gaze, haptics) for inclusive control
Adaptive rendering and content personalization addressing diverse user needs
AI-assisted 3D content creation and editing supporting accessibility
Ethical frameworks and privacy-preserving AI methods for XR applications
Cross-platform standards and interoperability for accessible XR ecosystems
User-centered design and evaluation methodologies targeting inclusive XR
Applications of AI within XR for education, healthcare, and social inclusion
Benchmarking and validation of AI-powered accessibility features
AI-supported cognitive load management and simplified interaction paradigms
Important dates
Submission Deadline:
November 3rd, 2025 (AoE)
Notification:
Early December 2025
Camera Ready:
Mid December 2025
Workshop Date:
January 26, 27, or 28, 2026 (TBD)
Venue
The AI-AXESS workshop is part of the IEEE AIxVR 2026 conference in Osaka, Japan.
For questions about the workshop, please contact Alexander Marquardt (Nara Institute of Science and Technology, Japan).