CNIT-AST: Research Project
Team: Peter, Jak
Instructor: Dr. Min
Indoor Navigation Behaviors [3]
One study investigated the indoor navigation behaviors of visually impaired people in order to understand the methods and strategies they employ in unfamiliar environments. A main finding is that visually impaired individuals rely on sighted guides to navigate unfamiliar spaces. The difficulty of navigating such environments stems mostly from the number of obstacles and the density of people within the given space.
Participatory Design for Robot Guides [2]
This participatory design study explored designs for using robots as a navigation assistive technology for the visually impaired [2]. The participatory design sessions involved both visually impaired designers and non-designers, who together generated design criteria for guide robots. The main insights are as follows: the robot needs to know the layout of the building and must be capable of traversing different types of indoor terrain, such as stairs, elevators, and doors. The robot should not make assumptions about the user's needs; for example, it should not assume that every user requires assistance, and it should respect the user's autonomy. Finally, the robot should avoid attracting extra attention to the user.
Robotics Technologies on Blind Assistance [1]
This article, an ICRA paper from UC Berkeley, outlines the use of a robot dog as a guide dog replacement for the visually impaired [1]. The platform was developed on top of the Mini Cheetah robot from MIT and uses its depth-sensing cameras to navigate through the environment. The map data is first fed to the robot dog; the user then holds a leash attached to the robot and is guided to the desired destination. The authors utilized various sensors, such as a force sensor in the leash, lidar, and a camera, to support robotic perception and human-robot cooperation.
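The leash force sensor described above suggests a simple control idea: the robot can modulate its speed based on measured leash tension so the human stays in tow. The sketch below is illustrative only, not the authors' controller; the set-point, gain, and speed limits are hypothetical values chosen for the example.

```python
# Minimal sketch of leash-tension-based speed control (assumed, not the
# paper's implementation): tension above a taut set-point means the human
# is lagging, so the robot slows; a slack leash lets it speed up.

def leash_speed_command(tension_n: float,
                        base_speed: float = 0.8,
                        taut_setpoint_n: float = 5.0,
                        gain: float = 0.05,
                        max_speed: float = 1.2) -> float:
    """Map measured leash tension (newtons) to a forward speed (m/s)."""
    # Proportional adjustment around the taut set-point, clamped to
    # the robot's feasible speed range [0, max_speed].
    error = taut_setpoint_n - tension_n
    speed = base_speed + gain * error
    return max(0.0, min(max_speed, speed))

print(leash_speed_command(5.0))   # at the set-point: cruise at base speed
print(leash_speed_command(15.0))  # human lagging: slow down
print(leash_speed_command(0.0))   # slack leash: speed up
```

In the actual system a hybrid controller would also handle the slack-leash case geometrically (the leash direction no longer constrains the human), which this one-dimensional sketch ignores.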
Learning-based Algorithms on Blind Assistance [5]
This work, an ICRA paper from the Berlin Institute of Technology, proposed a DRL-based assistance agent for guiding humans in crowded environments. The agent, built on the socially aware navigation algorithm SARL, is not only able to navigate safely in highly dynamic environments but is also able to follow or guide a human while adapting its behavior accordingly. Compared with other approaches, learning-based methods can model the relationships among robots and humans, so the framework has great potential for blind assistance in crowd-rich environments.
[1] A. Xiao, W. Tong, L. Yang, J. Zeng, Z. Li, and K. Sreenath, “Robotic guide dog: Leading a human with leash-guided hybrid physical interaction,” 2021. [Online]. Available: https://arxiv.org/abs/2103.14300
[2] S. Azenkot, C. Feng, and M. Cakmak, “Enabling building service robots to guide blind people: A participatory design approach,” in 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2016, pp. 3–10.
[3] W. Jeamwatthanachai, M. Wald, and G. Wills, “Indoor navigation by blind people: Behaviors and challenges in unfamiliar spaces and buildings,” British Journal of Visual Impairment, vol. 37, no. 2, pp. 140–153, 2019. [Online]. Available: https://doi.org/10.1177/0264619619833723