Post #8
Technical Capabilities & Environmental Modifications:
Perception: What objects, people, or landmarks does the robot need to be able to detect? At what level of precision and temporal continuity do they need to be detected?
The walker
General location detection
Must be able to detect the walker in the room (and differentiate it from other equipment, e.g., a wheelchair)
Detection just needs to be precise enough to navigate the robot to the walker.
While the walker will likely not be moving, Stretch should continue to detect the location of the walker as it steers toward it.
Contact point detection
Once Stretch is close to the walker, it needs to detect the front of the walker. We intend to add an ArUco marker to the front of the walker.
Stretch then needs to detect the "wings" (contact points) used for pushing/pulling the walker with enough accuracy to grab one with the gripper.
Once Stretch has hold of the walker, the contact points no longer need to be detected, but the ArUco marker still needs to be monitored to check that the walker is being steered correctly.
The PD patient
Must be able to detect the patient with enough accuracy to steer toward them.
Need to continue to update the location of the patient as they may move.
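Since the patient may move, one minimal way to keep their location updated is to smooth each new detection into a running estimate. This is a sketch, not our actual tracker; the class name and smoothing factor are assumptions.

```python
# Hedged sketch: keep a smoothed estimate of the patient's position,
# updated every time the camera re-detects them. alpha=0.5 is an
# illustrative smoothing factor, not a tuned value.
class PatientTracker:
    def __init__(self, alpha=0.5):
        self.alpha = alpha
        self.position = None  # (x, y) in meters, map frame

    def update(self, x, y):
        """Fold a new detection into the estimate and return it."""
        if self.position is None:
            self.position = (x, y)
        else:
            px, py = self.position
            self.position = (px + self.alpha * (x - px),
                             py + self.alpha * (y - py))
        return self.position
```

Smoothing keeps the navigation goal from jumping around on noisy detections while still following the patient if they walk to a new spot.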
Obstacles
Must be able to detect common household obstacles: walls, cabinets/counters, chairs, tables, and miscellaneous furniture.
Must continue to update the location of obstacles as the robot moves toward the patient.
Must also be able to detect other people in the environment.
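Keeping obstacle locations updated as the robot moves amounts to maintaining some kind of map. A minimal version, assuming a 2D occupancy map with an illustrative 5 cm cell size:

```python
# Hedged sketch: remember where obstacles were detected as Stretch
# moves, using a coarse 2D occupancy map. The cell size is an
# assumption, not a measured requirement.
CELL_M = 0.05  # map resolution: 5 cm cells

occupied = set()  # (row, col) cells known to contain an obstacle

def world_to_cell(x, y):
    """Convert world coordinates (meters, map-corner origin) to a cell."""
    return (int(y / CELL_M), int(x / CELL_M))

def mark_obstacle(x, y):
    """Record a detected obstacle (e.g., a chair leg) at world (x, y)."""
    occupied.add(world_to_cell(x, y))

def is_free(x, y):
    """True if no obstacle has been seen in this cell."""
    return world_to_cell(x, y) not in occupied

mark_obstacle(2.0, 3.0)  # e.g., a chair leg seen at (2 m, 3 m)
```

A real system would also clear cells when obstacles move away (people, chairs), but the core requirement is the same: detections persist in a map that navigation can query.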
Manipulation: What objects or landmarks will the robot need to physically interact with?
Stretch only needs to physically interact with the walker; it needs to pull, push, and steer it around the room. We intend to add contact points to the walker that allow Stretch to easily grip and move it.
Navigation: What navigational capabilities does the robot need? Does it need to be able to move to arbitrary points on a map? Does it need to precisely position itself relative to landmarks in the environment?
Stretch needs to be able to navigate to the walker and then back to the patient. It needs to precisely position itself in front of the walker when grabbing onto it. It also needs to precisely position the walker in front of the patient.
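The "precisely position the walker in front of the patient" step can be reduced to simple geometry once the patient's pose is known. A sketch, assuming the patient's position and heading come from perception; the 0.4 m standoff is an illustrative number, not a measured requirement:

```python
# Hedged sketch: compute the goal pose for the walker -- a fixed
# standoff directly in front of the patient, facing back toward them
# so the handles are reachable.
import math

def delivery_pose(patient_x, patient_y, patient_heading_rad, standoff_m=0.4):
    """Return (x, y, heading) for the delivered walker."""
    gx = patient_x + standoff_m * math.cos(patient_heading_rad)
    gy = patient_y + standoff_m * math.sin(patient_heading_rad)
    # The walker faces the patient, i.e., opposite their heading.
    return gx, gy, patient_heading_rad + math.pi
```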
Interaction: What state information about the user does the system need to know? What explicit input or commands does the user need to communicate to the robot? What information should the robot communicate back to the user? Which of the above autonomous capabilities might need human monitoring (e.g., to stop the robot immediately if something goes wrong) or human help/control (e.g., take over driving the robot if it gets lost)?
Stretch should communicate to the user which stage it is in (e.g., finding the walker, moving the walker, letting go of the walker). The user only needs to tell Stretch when they want the walker and when they have a secure hold of it. The eventual goal is for the entire autonomous routine to run without monitoring, aside from perhaps tracking the robot's location relative to the patient. During the prototyping phase, however, Stretch will need to be monitored while finding the walker and bringing it to the user.
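The interaction described above is essentially a small state machine: two user commands drive the transitions, and the current stage is what the robot announces. The stage and command names below are illustrative assumptions, not part of any actual Stretch software.

```python
# Hedged sketch: the stage progression Stretch announces to the user.
# Two user commands ("fetch", "have_it") and two internal autonomy
# milestones drive the transitions. All names are assumptions.
class WalkerTask:
    def __init__(self):
        self.stage = "idle"

    def announce(self):
        """What the robot communicates back to the user."""
        return f"Stretch is in stage: {self.stage}"

    def on_user_command(self, command):
        if command == "fetch" and self.stage == "idle":
            self.stage = "finding_walker"        # user wants the walker
        elif command == "have_it" and self.stage == "moving_walker":
            self.stage = "releasing_walker"      # user has a secure hold

    def on_autonomy_event(self, event):
        if event == "walker_grasped" and self.stage == "finding_walker":
            self.stage = "moving_walker"
        elif event == "gripper_open" and self.stage == "releasing_walker":
            self.stage = "done"
```

Keeping the user's two commands as the only external inputs matches the goal above: everything between "fetch" and "have_it" runs autonomously.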
Environment: How do you need to modify the robot's environment to make the above autonomous capabilities possible?
The room Stretch is operating in needs to be clear of small obstacles on the floor, and Stretch needs enough space to rotate the walker and pull it between pieces of furniture.