This week is the final week of the project, and we would like to share a video demo of the completed system.
Throughout this project we created and tested a human-robot interaction system that uses audio navigation to help blind users navigate an unfamiliar environment. The navigation interaction uses a waypoint system: the robot moves ahead to a waypoint along the path, the user follows the robot to that waypoint, and the process repeats until the destination is reached.
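As a rough illustration of this loop in Unity C#, a minimal sketch is shown below. The class, field, and threshold values are illustrative assumptions rather than our exact implementation.

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch of the waypoint guidance loop (names and values are
// assumptions for this example, not the project's actual code).
public class WaypointGuide : MonoBehaviour
{
    public List<Transform> waypoints;    // ordered waypoints along the route
    public Transform user;               // tracked user position (VR rig or person)
    public float moveSpeed = 1.0f;       // robot speed in meters per second
    public float arrivalRadius = 0.5f;   // how close the user must get, in meters

    void Start()
    {
        StartCoroutine(GuideRoute());
    }

    IEnumerator GuideRoute()
    {
        foreach (Transform waypoint in waypoints)
        {
            // 1. Robot moves ahead to the next waypoint.
            while (Vector3.Distance(transform.position, waypoint.position) > 0.01f)
            {
                transform.position = Vector3.MoveTowards(
                    transform.position, waypoint.position, moveSpeed * Time.deltaTime);
                yield return null;
            }

            // 2. Robot signals from its new location (beacon sound / spoken instruction).
            Debug.Log("Robot at waypoint, emitting guidance audio");

            // 3. Wait until the user has followed the sound to the waypoint,
            //    then continue to the next waypoint.
            while (Vector3.Distance(user.position, transform.position) > arrivalRadius)
                yield return null;
        }
    }
}
```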
Overall, our approach offers a promising solution for helping blind individuals navigate indoor environments with greater independence and confidence. By using robots as waypoints, we can provide assistance without drawing unnecessary attention to the user and allow them to move freely and comfortably throughout their environment.
The environment for the VR simulation was created in Unity, a game engine capable of creating VR games and applications. The setup consists of a virtual corridor with two navigation destinations. The user can choose to navigate to either destination by selecting it on a virtual tablet in VR.
A virtual character was used to represent the navigation robot. To simulate audio navigation, the robot emitted sounds from its location, which were played back to the user through the stereo speakers attached to the VR headset. Depending on the robot's position relative to the user within the virtual environment, the user heard the robot's sound coming from the robot's direction. For example, if the robot was to the front right of the user, the sound appeared to come from the front right through the headset. This allowed the user to locate the robot from the direction the audio was coming from.
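This kind of directional playback maps to Unity's built-in 3D sound settings on an AudioSource attached to the robot, with the AudioListener on the VR camera determining which ear the sound arrives from. A minimal sketch follows; the specific rolloff values are assumptions, not our exact configuration.

```csharp
using UnityEngine;

// Minimal sketch of a 3D (directional) audio beacon on the robot, using
// Unity's built-in AudioSource spatialization. The AudioClip is assigned in
// the Inspector; the distance values here are illustrative assumptions.
[RequireComponent(typeof(AudioSource))]
public class RobotAudioBeacon : MonoBehaviour
{
    void Start()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.spatialBlend = 1.0f;                    // fully 3D: direction follows the robot's position
        source.rolloffMode = AudioRolloffMode.Linear;  // volume falls off with distance
        source.minDistance = 1.0f;
        source.maxDistance = 20.0f;
        source.loop = true;                            // repeating beacon tone
        source.Play();
    }
}
```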
The user could control their movement within the virtual environment by physically walking and turning, or by using a joystick for locomotion.
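For reference, joystick locomotion of this kind can be wired up with a CharacterController driven by Unity's standard input axes, as in the sketch below. This is an illustrative assumption; an XR toolkit's built-in locomotion component could serve the same purpose.

```csharp
using UnityEngine;

// Illustrative joystick locomotion sketch: the stick (mapped to the standard
// "Horizontal"/"Vertical" axes) moves the user rig on the ground plane,
// relative to the direction the headset is facing.
[RequireComponent(typeof(CharacterController))]
public class JoystickLocomotion : MonoBehaviour
{
    public Transform head;       // VR camera transform
    public float speed = 1.4f;   // roughly walking speed, in m/s

    void Update()
    {
        CharacterController controller = GetComponent<CharacterController>();
        Vector3 forward = Vector3.ProjectOnPlane(head.forward, Vector3.up).normalized;
        Vector3 right   = Vector3.ProjectOnPlane(head.right, Vector3.up).normalized;
        Vector3 move = forward * Input.GetAxis("Vertical") + right * Input.GetAxis("Horizontal");
        controller.SimpleMove(move * speed);   // SimpleMove applies gravity and moves the rig
    }
}
```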
This simulation environment allowed the researchers to mimic the experience of audio-based navigation within a virtual space. The use of Unity as a platform for development streamlined the testing and iteration process significantly. The directional audio experience was adequate for the testing as well.
There were several limitations to the test setup using the virtual reality system. Although the approach could simulate the experience of audio-based navigation, it could not completely replicate the real-world experience of audio navigation for blind users. Firstly, using the joystick for navigation in a virtual environment was a completely different experience from walking and following the robot: the sensation of walking and moving within a real space could not be reproduced by joystick navigation. However, we did not want our users to be walking around with VR headsets strapped to their heads either. Another limitation was the fidelity of the audio experience. The stereo speakers on the headset conveyed an approximate direction of the sound, but we found during testing that when a user was very close to the sound source, the sense of direction was lost and the user could not determine which way the sound was coming from. This limitation may be due to how Unity processes audio and to the quality of the stereo speakers on the headset, neither of which we were able to address for this experiment.
For the user testing process, participants were instructed to select the destination on the virtual tablet and hit start to begin navigation. The screen on the VR headset then faded to black, and the participants could no longer see the virtual environment. The robot then emitted audio signals and messages to guide the participant to its location. The participant's task was to navigate through the corridor by following the sounds emitted from the robot's location. After the test, participants were asked a series of questions in a semi-open-ended interview about the experience and any points of confusion in the audio interaction with the robot.
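One simple way to blank the view at the start of navigation is to fade a full-screen black overlay in front of the camera; a minimal sketch is below. The component and field names are illustrative, not necessarily how we implemented it.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.UI;

// Illustrative sketch: fade a full-screen black UI Image over the VR camera
// once the participant presses start, hiding the virtual environment.
public class BlackoutOnStart : MonoBehaviour
{
    public Image blackOverlay;        // full-screen Image on a canvas in front of the camera
    public float fadeDuration = 1.0f;

    public void BeginNavigation()     // hooked up to the virtual tablet's start button
    {
        StartCoroutine(FadeToBlack());
    }

    IEnumerator FadeToBlack()
    {
        for (float t = 0f; t < fadeDuration; t += Time.deltaTime)
        {
            blackOverlay.color = new Color(0f, 0f, 0f, t / fadeDuration);
            yield return null;
        }
        blackOverlay.color = Color.black;   // fully opaque; the participant now navigates by sound only
    }
}
```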
For real-world testing, the researchers set up a Wizard-of-Oz scenario with each participant. The test was conducted in the hallway of an academic building. Unfortunately, the researchers were not able to test with participants with visual disabilities; instead, participants were given blindfolds and a white cane to approximate the experience of blind users. The robot was initially placed in front of the blindfolded participant before the test began.
One main limitation was that the researchers were not able to conduct testing with visually impaired or blind participants, which means the evaluation and feedback gathered were not a complete representation of the robot's true users. Another limitation was that, because the test was conducted with participants from the area, they may have already been familiar with the routes used for navigation, so the results may not accurately represent the experience of entering a new, unknown environment. However, the researchers believed this familiarity was largely offset by the disorientation that blindfolding introduces for sighted participants.
The participants were instructed to follow the instructions provided through the robot's audio until they reached their destination. Once a participant was blindfolded, the robot began moving to the first waypoint on the path. The participant was then given instructions to follow the robot to the waypoint, with statements that contained both the direction and the number of steps needed to get there, for example, "Please go forward for 10 steps". Once the participant reached the waypoint where the robot was located, the robot moved to the next waypoint and gave further instructions. If the path turned at a corner, the robot informed the participant of the upcoming change in direction. The task was completed when the participant successfully reached the destination, or ended if the participant failed to reach it and withdrew from the experiment. Notably, all participants successfully reached the destination.
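In the Wizard-of-Oz test these phrases were issued by a human operator. Purely as an illustration of how such an instruction could be composed automatically, the sketch below derives a step count and turn angle from the user's pose and the next waypoint, assuming an average step length; the names and the 0.7 m step length are our assumptions.

```csharp
using UnityEngine;

// Hypothetical helper for composing instructions of the form used in the
// study ("Please go forward for 10 steps", "Turn left 90 degrees and walk
// 10 steps"). In the real-world test a human wizard issued these phrases.
public static class InstructionBuilder
{
    const float StepLengthMeters = 0.7f;   // assumed average step length

    public static string Build(Vector3 userPosition, Vector3 userForward, Vector3 waypoint)
    {
        Vector3 toWaypoint = waypoint - userPosition;
        toWaypoint.y = 0f;                 // work on the ground plane
        userForward.y = 0f;

        int steps = Mathf.Max(1, Mathf.RoundToInt(toWaypoint.magnitude / StepLengthMeters));
        float angle = Vector3.SignedAngle(userForward, toWaypoint, Vector3.up);

        if (Mathf.Abs(angle) < 20f)        // roughly straight ahead
            return $"Please go forward for {steps} steps.";

        string direction = angle < 0f ? "left" : "right";
        return $"Turn {direction} {Mathf.Abs(Mathf.RoundToInt(angle))} degrees and walk {steps} steps.";
    }
}
```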
User testing both in virtual reality and in the real world suggested that audio navigation can help blind users reach a desired destination. Participants in the study successfully navigated to the desired destination at the end of the virtual corridor solely by following the sound emitted from the virtual robot.
From the real-world tests, the researchers found that hearing sounds from the robot while blindfolded helped participants localize themselves within the corridor. Hearing the sounds and maintaining a mental image of the robot's location relative to oneself brought a sense of comfort within the environment. This is consistent with the findings from the brain-scan study of participants discussed in Sanchez (2014).
Initially, the researchers thought that emitting simple tones from the robot and letting users follow the sound would be enough to get them to the desired destination. However, testing showed that some participants struggled with navigation: being close to the robot, they thought they had already reached the waypoint when in fact they had not gotten close enough. This confusion arises partly from the limited accuracy of spatial sound within the VR environment, but the issue needs to be addressed in the real world as well. When the user is near the robot, it can be difficult to determine the direction of the sound, which leads to uncertainty about which way to go.
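One possible mitigation, sketched below purely as an illustration rather than our final design, is to stop relying on directional audio once the user is within a small radius of the robot and instead play an explicit arrival confirmation.

```csharp
using UnityEngine;

// Sketch of a possible mitigation (an assumption, not the project's final
// design): when the user gets within a small radius of the robot, stop the
// directional beacon and play an explicit spoken "you have arrived" cue.
public class ArrivalAnnouncer : MonoBehaviour
{
    public Transform user;
    public AudioSource beacon;          // looping directional beacon sound
    public AudioSource arrivalVoice;    // spoken confirmation clip
    public float arrivalRadius = 0.8f;  // meters

    bool announced = false;

    void Update()
    {
        bool close = Vector3.Distance(user.position, transform.position) < arrivalRadius;
        if (close && !announced)
        {
            beacon.Stop();              // direction cues are unreliable this close
            arrivalVoice.Play();        // explicit confirmation instead
            announced = true;
        }
        else if (!close)
        {
            announced = false;          // re-arm for the next waypoint
            if (!beacon.isPlaying) beacon.Play();
        }
    }
}
```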
The researchers found that throughout the navigation journey with the robot, users can become confused if instructions are not provided in a clear and concise manner. Initially, simple sounds and tones were used to indicate when the robot had reached a waypoint, but it was necessary to explain the meaning of these tones to the users, as they frequently became confused when the instructions were unclear. More specifically, direction changes and user feedback need to be clearly communicated.
One source of confusion for users is when the robot reaches a waypoint at a corner and changes direction as it moves to the next waypoint. This sudden change in direction can be unexpected and disorienting for the user. One user suggested implementing a feature similar to the one used in Google Maps, where changes in the route are announced in advance. In response, the researchers included clear phrases, such as "turn left 90 degrees and walk 10 steps," to help users better understand the route they are traveling. This helped reduce confusion and improve the user experience.
During testing in a real-world environment, the researchers found that providing step-by-step instructions can result in errors. For example, the robot might instruct the user to walk 15 steps forward, but at the end of those 15 steps, the user may still not have reached the waypoint. In this case, the robot would need to provide additional instructions to help the user reach the waypoint. Another issue that was observed was that users sometimes nearly collided with the robot, even when following the instructions provided by the robot. To address this issue, the researchers will need to provide feedback and ensure that users can safely and accurately navigate to the desired waypoint during their journey.
When testing without user feedback in the virtual environment, one participant mentioned that they felt somewhat uneasy when walking blindly toward the robot, not knowing if they were going the right way or not.
The current waypoint format requires the user to wait for the robot to reach the next waypoint before beginning to walk, which can cause delays in the journey. In the future, the researchers plan to experiment with a different approach where the user follows the robot closely and relies on audio cues for navigation. This may help reduce delays and improve the overall experience.