NIH R01EY030470: Automated Orientation & Mobility Training in Virtual Reality for Low Vision Rehabilitation
Project Summary
Irreversible impairment of vision (low vision) negatively impacts a patient's ability to conduct activities of daily living. One of the most important affected activities is mobility, the ability to move independently, safely, and efficiently in one's environment. Orientation and Mobility (O&M) rehabilitation is the only proven treatment that restores mobility lost to low vision. O&M rehabilitation is conducted by Certified O&M Specialists (COMS), who teach navigation skills to low vision travelers in real streets and guide them to practice these skills until they achieve their mobility goals. Because practicing navigation skills in real streets can be dangerous, a low vision traveler must be accompanied by a COMS throughout O&M skill training, which may take many hours. While current O&M rehabilitation is effective, it is not accessible or affordable to many individuals with low vision who could benefit from it: there are only a small number of COMS, and they tend to cluster in large cities; low vision individuals have limited mobility to reach O&M specialists; and low vision individuals tend to have low income or be unemployed and thus cannot afford the cost of individual lessons from COMS, which is not reimbursable by insurance.
Our solution to the accessibility and affordability problems of O&M rehabilitation is a Virtual Reality-based Intelligent O&M Specialist (VR-IOMS), a computer program that can conduct quality O&M skill training automatically in safe virtual streets. If successfully developed and validated, the VR-IOMS will allow low vision individuals to conduct self-regulated O&M skill learning in safe virtual environments, at convenient locations and times, and with minimal cost. The objectives of this research project are to develop VR-IOMSs that can teach three sets of skills for three O&M tasks, to implement them on virtual reality simulators, and to compare their training effectiveness with that of human COMS in a clinical training trial. The three VR-IOMSs are specialized in teaching skills for three O&M tasks: the timing to cross a signalized street (TCSS), the timing to cross an uncontrolled street (TCUS), and learning the outdoor numbering system (LONS). The three VR-IOMSs will be developed and implemented in sequence, and the training trials of these VR-IOMSs will commence after their implementation.
Personnel
Project Investigators:
Lei Liu, University of Alabama Birmingham
Jacob Chakareski, New Jersey Institute of Technology
Developers & Graduate Students:
Chaitanya Ghadling, New Jersey Institute of Technology (former)
Jayden Kim, New Jersey Institute of Technology (former)
Vivek Senapati, New Jersey Institute of Technology (former)
Kantida Nanon, New Jersey Institute of Technology (former)
Kyuung Cha (Casey), New Jersey Institute of Technology (former)
Postdoctoral Researcher:
Xiaoyan Zhou, New Jersey Institute of Technology
Trainers and Certified Orientation and Mobility Specialists (COMS):
Jack Harrison, Alabama Institute for Deaf and Blind
Amber James, Alabama Institute for Deaf and Blind
O&M Tasks:
1. Timing to Cross a Signalized Street
2. Timing to Cross an Uncontrolled Street
3. Joystick Controller Training System
4. Learning the Outdoor Numbering System
1. Timing to cross a signalized street:
3. Joystick Controller Training System:
Training Program Overview: Patients are introduced to the joystick controller's control elements and their functions. A hands-on tutorial demonstrates how to move around, look around, and interact with the environment using the joystick.
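The specific control scheme is not detailed here. Purely as an illustration, the Python sketch below shows one way the tutorial's three basic actions (move, look, interact) could be mapped from controller input; the JoystickState type, field names, and thresholds are hypothetical and not taken from the project code.

```python
from dataclasses import dataclass

@dataclass
class JoystickState:
    """Hypothetical snapshot of the controller inputs covered in the tutorial."""
    left_stick: tuple[float, float]   # forward/backward and sideways movement
    right_stick: tuple[float, float]  # look left/right and up/down
    trigger_pressed: bool             # interaction button (e.g., answer a mini-game)

def describe_action(state: JoystickState) -> str:
    """Map a controller snapshot to the action the trainee is practicing."""
    if state.trigger_pressed:
        return "interact with the highlighted object"
    lx, ly = state.left_stick
    if abs(lx) > 0.1 or abs(ly) > 0.1:   # placeholder dead-zone threshold
        return "move around the path"
    rx, ry = state.right_stick
    if abs(rx) > 0.1 or abs(ry) > 0.1:
        return "look around the environment"
    return "idle"

# Example: the trainee pushes the left stick forward.
print(describe_action(JoystickState((0.0, 1.0), (0.0, 0.0), False)))  # move around the path
```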
Exploration Stages: After mastering the controls, patients participate in exploration stages set in a virtual environment. Each stage begins with a consistent scenario:
Scenario: An oval-shaped path with objects positioned along the right side.
Objective: Move from the starting point to a designated destination.
Interactive Challenges: Along the path, patients encounter hotspots where they must complete mini-games before proceeding:
Mini-Game Tasks:
Solve a small math equation.
Identify a specific shape among many based on a requested color.
Determine the color of a requested shape.
Upon reaching the destination, patients answer a question related to patterns observed in their mini-game answers.
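The document does not specify how the mini-games are generated or checked. The sketch below illustrates the three task types under simple assumptions; the shape and color lists, function names, and answer-checking are hypothetical.

```python
import random

SHAPES = ["circle", "square", "triangle", "star"]
COLORS = ["red", "green", "blue", "yellow"]

def math_task() -> tuple[str, int]:
    """Small arithmetic equation, e.g. '3 + 5 = ?'."""
    a, b = random.randint(1, 9), random.randint(1, 9)
    return f"{a} + {b} = ?", a + b

def shape_by_color_task() -> tuple[str, dict[str, str], str]:
    """Identify the shape that has the requested color."""
    scene = {shape: random.choice(COLORS) for shape in SHAPES}
    target_color = random.choice(list(scene.values()))
    # For simplicity, the first shape with the requested color counts as correct.
    answer = next(shape for shape, color in scene.items() if color == target_color)
    return f"Which shape is {target_color}?", scene, answer

def color_of_shape_task() -> tuple[str, dict[str, str], str]:
    """Determine the color of the requested shape."""
    scene = {shape: random.choice(COLORS) for shape in SHAPES}
    target_shape = random.choice(SHAPES)
    return f"What color is the {target_shape}?", scene, scene[target_shape]

question, expected = math_task()
print(question, "-> expected answer:", expected)
```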
Support Features:
To assist patients in completing the exploration stages, two helper functionalities are available:
Avatar Guidance: A virtual avatar moves along the optimal path for the patient to follow, providing a visual guide to success.
Audio Feedback: Audio prompts and corrections inform the patient of mistakes and guide them on the next steps.
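How the avatar guide and audio prompts are driven is not described beyond the list above. One plausible sketch, assuming the optimal path is stored as waypoints, with placeholder coordinates and a placeholder deviation threshold:

```python
import math

# Hypothetical optimal path along the oval track, as (x, z) waypoints.
waypoints = [(0.0, 0.0), (2.0, 0.5), (4.0, 1.5), (6.0, 3.0)]

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def avatar_position(progress: float):
    """Place the guiding avatar one waypoint ahead of the trainee's progress (0..1)."""
    idx = min(int(progress * (len(waypoints) - 1)) + 1, len(waypoints) - 1)
    return waypoints[idx]

def audio_prompt(trainee_pos, max_deviation: float = 1.5) -> str:
    """Return a spoken correction if the trainee strays too far from the path."""
    nearest = min(waypoints, key=lambda w: distance(w, trainee_pos))
    if distance(nearest, trainee_pos) > max_deviation:
        return "You have left the path. Turn back toward the avatar."
    return "Keep following the avatar to the next hotspot."

print(avatar_position(0.5))          # where the avatar stands at the halfway point
print(audio_prompt((3.5, 4.8)))      # correction for a trainee far off the path
```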
By the end of the training program, patients will have a comprehensive understanding of the joystick controller and be well prepared to explore and interact with VR-IOMS virtual environments confidently.
Figures (captions only): overall structure of the training; tutorial and quests; equation mini-game task; finding the shape or color mentioned in the text; stage selection menu; control elements and functionalities.
4. Learning the outdoor numbering system:
Training Program Overview: The training program is designed to help patients with low vision navigate unfamiliar neighborhoods by understanding and interpreting street address numbering systems. This training equips participants with the skills needed to orient themselves and plan routes effectively in real-world environments.
Objectives:
Teach patients to identify and read street addresses across various address arrangements.
Help patients understand the numbering patterns (e.g., even numbers on one side, odd numbers on the other) in typical neighborhoods.
Prepare patients to navigate different types of communities and locate specific addresses.
Scenario:
Patients practice in a uniform community where:
Even-numbered addresses are on one side of the street, and odd-numbered addresses are on the other.
The task is to move sequentially to four different houses and read their addresses.
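The uniform-community scenario rests on a simple rule the trainee is meant to internalize. As an illustration only, the rule can be written as below; the side labels and the assumption that numbers increase in the direction of travel are placeholders, not project specifications.

```python
def side_of_street(address: int, even_side: str = "right") -> str:
    """Infer which side of the street an address is on in the uniform community."""
    odd_side = "left" if even_side == "right" else "right"
    return even_side if address % 2 == 0 else odd_side

def walking_direction(current: int, target: int) -> str:
    """Assume numbers increase in the direction of travel and compare with the last address read."""
    return "continue forward" if target > current else "turn around"

# Example: the trainee has just read house 412 and is looking for 417.
print(side_of_street(417), walking_direction(412, 417))  # left continue forward
```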
Exploration Stages:
Patients will be trained across three distinct community types, each with unique address placements:
Mailbox Community: Addresses are located on mailboxes outside the houses.
Plank Community: Addresses are displayed on planks or signboards near the entrance.
Door Community: Addresses are fixed on or above the front door of the house.
Final Evaluation:
A mixed community combines elements from the three types, with addresses randomly placed on houses.
Interactive Challenges:
Locate the position of the street address on houses in each community type.
Understand and interpret the address arrangement and numbering patterns.
In the final stage, identify the location of a house with a random address based on the observed patterns.
Support Features: To assist patients in mastering the task, the following aids are provided:
Audio Feedback:
Provides guidance when mistakes are made.
Informs patients where to look for addresses in different communities.
Zoom-In Function: Patients can use a telescope-like object to zoom in and read an address they have trouble viewing.
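The zoom-in aid is described only at this level. A minimal sketch of how a telescope-like magnifier might be approximated by narrowing the virtual camera's field of view; the default field of view and magnification factor here are placeholders, not the project's settings.

```python
DEFAULT_FOV_DEG = 60.0   # placeholder default field of view
ZOOM_FACTOR = 4.0        # placeholder telescope magnification

def camera_fov(zoom_active: bool) -> float:
    """Narrow the field of view while the telescope-like object is raised (a simple
    approximation of optical magnification)."""
    return DEFAULT_FOV_DEG / ZOOM_FACTOR if zoom_active else DEFAULT_FOV_DEG

print(camera_fov(False), camera_fov(True))  # 60.0 15.0
```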
By completing this program, patients will develop a thorough understanding of how to locate and interpret street addresses in various community types, enabling them to confidently navigate unfamiliar neighborhoods and travel to specific locations.
Figures (captions only): exploration flowchart; community flow; mailbox community; plank community; door community.
Additional Resources
The links below are video demonstrations.
Pedestrian lights demo: https://drive.google.com/file/d/17BhnyNmWF_6CrcEnAlADyTRfcNo6pzl3/view?usp=drivesdk
Drop-down lists: https://drive.google.com/file/d/17BBjAcNXBMm3WrgrwNjhrXFkOZs9T2JI/view?usp=drivesdk
Pointer functionality: https://drive.google.com/file/d/19X-2RoNS7aK25Nxw06XNp9zNfcesc0tw/view?usp=drivesdk
Link to demo executables: https://drive.google.com/folderview?id=16I65OizC5Kj8II8WzY8G5s1mWyzx5xVW
Performing an experiment: https://youtu.be/mgYo_svKXdg
Performing an experiment with cataract goggles: https://youtu.be/qCsLYCjI60k
Panning video of the setup: https://youtu.be/ToOQLfTMBdM
Participant in training: https://youtu.be/YKjCMrKp1gY
A participant interacts with the Road and Pedestrian Safety Simulation during a training session. This phase is performed before every evaluation task to provide participants with basic knowledge of traffic safety rules.
Evaluation phase of the Road and Pedestrian Safety Simulation with a participant wearing cataract vision goggles. The goggles demonstrate the effect of glare on visual function and offer a general understanding of congenital impairments.
The participant is provided with auditory and text-based information. Objects in the scene are highlighted to give the participant a clearer understanding of the concepts in the virtual traffic environment.
The simulation runs on three displays in a surround setup with stereo speakers, providing a comfortable and immersive experience in which the participant can learn and perform evaluations in the Road and Pedestrian Safety Simulation.