Our aim was to test Stretch's ability to roll the marker on a user's leg (the marker is in lieu of the ointment roller that the final evaluation will use). We were also testing its ability to navigate the lift to the desired application location on the user's leg.
We showed this video to Heer's CNA roommate, and she brought up a few considerations. She mentioned that for ointment application, the robot would need to know how much ointment to dispense. For now, we are implementing a minimum viable product that requires users (or caretakers) to dispense the ointment onto the application roller. She also mentioned that the user may want different speeds for application and for following commands, which we discuss in post 8. Another interesting point she raised was whether the robot can convey what it is trying to do. This gives us important design considerations for our user interface: the user should never be uneasy about what Stretch is about to do, and its actions should be predictable for user comfort.
Perception: The robot will need to detect the bottle of ointment to apply, the swab on which to put the ointment, and the location on the human where it should be applied. In particular, the robot will need to detect where to squeeze the bottle of ointment so it can squeeze it onto the swab. A high level of precision will be required both for this and for detecting the swab, since the swab is relatively small. The application location on the human will not need to be as precise, as we are not targeting sensitive areas such as the face, but rather larger areas on the arms or legs.
Manipulation: The robot will need to be able to navigate its environment and the obstacles within it. We foresee both the lidar sensor and the camera being critical tools in allowing Stretch to travel to and from the medication and the user safely. It would be useful to have a predefined route to the medicine, or a predictable location for it, as well as some way of telling Stretch where the user is and what position they are in when Stretch is applying medicine or ointment.
Navigation: The robot will need to navigate to wherever the necessary materials (i.e., the bottle of ointment and the swab) are located. It will need to position itself so that it can squeeze the bottle of ointment onto the swab, bring the swab to the human, and gently apply the ointment. It will also need to navigate around the human's environment so that it can reach the appropriate place to apply the ointment without bumping into things in the environment.
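As a rough sketch of what this could look like in code, here is how a navigation goal might be sent through the standard ROS 1 navigation stack (move_base), which Stretch supports; the frame name and coordinates below are placeholders for our setup, not values we have tested.

```python
#!/usr/bin/env python
# Hypothetical sketch: send Stretch toward the supply table.
# Assumes move_base is running with a pre-built map of the room.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('go_to_supplies')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.2   # placeholder coordinates for the
goal.target_pose.pose.position.y = 0.5   # table where the ointment sits
goal.target_pose.pose.orientation.w = 1.0

client.send_goal(goal)
client.wait_for_result()
```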
Interaction: Stretch will require some input from the user or a caretaker about what treatment is needed (medicine, ointments, bandages, etc.). We foresee the retrieval of these items being fairly automated (although human oversight is definitely recommended), while the actual application of these treatments will be somewhat automated but with lots of human input to ensure user safety. For instance, if Stretch were applying an ointment, it might retrieve the swab and ointment, place the gripper near the application site, and wait for the user to confirm that the target area is accurate and the ointment is correct. The actual application will be slow and gentle in order to give the user enough time to halt the process if needed, or to take over for more accuracy. As a comparison, it requires a similar level of input to, say, adaptive cruise control: much of the process is automated, but human oversight and direction are still needed.
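To illustrate, here is a minimal sketch of that confirmation gate using Hello Robot's stretch_body Python API; the joint positions, speeds, and prompt text are assumptions for illustration, not values we have validated.

```python
# Hypothetical sketch of the confirm-then-apply flow described above.
import stretch_body.robot

robot = stretch_body.robot.Robot()
robot.startup()

# Move the gripper near the (assumed) application site, then pause for consent.
robot.lift.move_to(0.7)   # placeholder lift height (m)
robot.arm.move_to(0.3)    # placeholder arm extension (m)
robot.push_command()
robot.lift.wait_until_at_setpoint()
robot.arm.wait_until_at_setpoint()

if input('Correct target area and ointment? [y/N] ').strip().lower() == 'y':
    try:
        # Slow, gentle final approach so the user can interrupt at any time.
        robot.arm.move_by(0.05, v_m=0.01, a_m=0.01)
        robot.push_command()
        robot.arm.wait_until_at_setpoint()
    except KeyboardInterrupt:
        pass  # user halted; proceed to a clean stop
robot.stop()
```

The key design choice here is that the final motion only begins after an explicit yes, and it runs slowly enough that the user can interrupt it at any point.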
Environment: To make the problem easier, we will modify the environment so the necessary materials are right next to the human, so that the robot doesn't have to navigate far to find these items. Additionally, we will use an ointment bottle that is easy for the robot to squeeze. We will also ensure other materials such as swabs and, if applicable, other ointments/bottles are well organized (we will start by using markers), labeled, and easily accessible to Stretch in order to ensure smooth and accurate object retrieval. We will also assume for now that the user is lying down on a bed or similar surface, and that the area of treatment is easily accessible (say, an arm on the side of the bed that Stretch will navigate to). At first, we will also assume that the ointment is pre-applied to swabs that Stretch can use, for simplicity and proof of concept.
This week, we worked on tele-operating our Stretch robot to test how precisely and gently it could move its arm and gripper. We used a hand sanitizer bottle as a stand-in for an ointment bottle that might be used on a wound. The goal was to see if Stretch could handle objects carefully enough for sensitive tasks like wound care. This also gave us a sense of where humans were needed in the control loop, and where autonomous capabilities could take over.
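To give a sense of what "gently" means at the command level, here is a minimal sketch of a slow approach and soft grasp with the stretch_body Python API; the gripper apertures and motion profile values are illustrative assumptions, not tuned settings.

```python
# Illustrative sketch of low-speed grasping; values are placeholders.
import stretch_body.robot

robot = stretch_body.robot.Robot()
robot.startup()

# Open the gripper, approach slowly, then close gently around the bottle.
robot.end_of_arm.move_to('stretch_gripper', 50)    # open (unitless aperture)
robot.arm.move_by(0.10, v_m=0.02, a_m=0.02)        # slow approach
robot.push_command()
robot.arm.wait_until_at_setpoint()
robot.end_of_arm.move_to('stretch_gripper', -30)   # soft close (placeholder)
robot.stop()
```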
We set up perception on Stretch using ArUco markers. We attached a marker to the sanitizer bottle and placed it naturally on a table near the robot. Stretch was able to detect the marker, figure out where the bottle was in space, and use that info to move more accurately during teleoperation.
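For reference, the detection side looks roughly like the following OpenCV sketch (Stretch's own ROS packages ship ArUco detection utilities, so this is just to show the idea); the marker dictionary, side length, and camera intrinsics below are placeholders, not our actual calibration.

```python
# Minimal ArUco detection and pose estimation sketch.
# Requires opencv-contrib-python >= 4.7 for the ArucoDetector class.
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread('frame_from_head_camera.png')  # placeholder image path
corners, ids, _ = detector.detectMarkers(frame)

if ids is not None:
    # Placeholder intrinsics; real use needs the camera's calibration.
    camera_matrix = np.array([[600.0, 0.0, 320.0],
                              [0.0, 600.0, 240.0],
                              [0.0, 0.0, 1.0]])
    dist_coeffs = np.zeros(5)
    s = 0.04  # marker side length in meters (placeholder)
    # 3D corners of the marker in its own frame, matching detection order.
    obj_points = np.array([[-s/2,  s/2, 0],
                           [ s/2,  s/2, 0],
                           [ s/2, -s/2, 0],
                           [-s/2, -s/2, 0]], dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj_points, corners[0][0],
                                  camera_matrix, dist_coeffs)
    if ok:
        print('Bottle marker at (x, y, z) in camera frame:', tvec.ravel())
```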
This was a big step toward helping Stretch interact safely with real medical supplies in the future!