post #13
Our project involves grabbing the walker and bringing it within reach of a Parkinson's patient. As a first step, we want to demonstrate the robot's basic ability to move from its current location to an ideal pose in front of the walker using the navigation stack.
For our Minimum Viable Product, we plan to:
1. Run ArUco marker detection to get the 3D point of the marker on the walker relative to the robot's base.
2. Apply a tf transformation to convert the marker's 3D point into the global map frame, then reduce it to a 2D pose on the map.
3. Navigate Stretch to that 2D pose so that its base is optimally positioned to grab the walker.
4. Use FUNMAP to move Stretch's gripper to the target grasp point (a 3D point) and grab the walker.
5. Repeat steps 1-3 for the ArUco tag on the person.
6. Release the gripper; the walker should now be within reach of the patient.
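The geometry behind steps 1-3 can be sketched without ROS. This is a minimal illustration, not the Stretch API: it assumes the marker's pixel location and depth reading come from the camera (pinhole back-projection for step 1), and that a 4x4 map-from-base transform is available (in a real system, tf2's `lookup_transform` would supply it). The function names, camera intrinsics, and `standoff` distance are our own illustrative choices.

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Step 1 (sketch): pinhole back-projection of a detected marker's
    pixel (u, v) plus a depth reading (metres) into a 3D point in the
    camera frame. fx, fy, cx, cy are the camera intrinsics."""
    return np.array([(u - cx) * depth / fx,
                     (v - cy) * depth / fy,
                     depth])

def marker_to_goal(p_base, T_map_base, standoff=0.5):
    """Steps 2-3 (sketch): transform a marker point from the robot's
    base frame into the map frame, then compute a 2D (x, y, yaw)
    navigation goal `standoff` metres in front of the marker, facing it."""
    # Homogeneous transform of the marker point into the map frame.
    p_map = (T_map_base @ np.append(p_base, 1.0))[:3]
    # Robot position in the map frame (translation part of the transform).
    robot_xy = T_map_base[:2, 3]
    # Yaw that faces the marker from the robot's current position.
    yaw = np.arctan2(p_map[1] - robot_xy[1],
                     p_map[0] - robot_xy[0])
    # Stop `standoff` metres short of the marker, facing it.
    goal_xy = p_map[:2] - standoff * np.array([np.cos(yaw), np.sin(yaw)])
    return goal_xy[0], goal_xy[1], yaw
```

For example, with an identity map-from-base transform and the marker 1 m straight ahead of the base, `marker_to_goal` returns a goal 0.5 m ahead with zero yaw; in the real pipeline that (x, y, yaw) pose would be sent to the navigation stack as the goal.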
For our stretch goals, we would first try using the ArUco tag on the patient. If we have time, we will experiment with nearest-mouth detection to estimate the 3D location of the patient's mouth.