Post #17
Team Progress Updates, Weeks 8 to 10
The gripper_pose_stack branch in this GitHub repo contains the latest implementation for our project.
The new_backend branch contains our initial approach of using FUNMAP to move the gripper for grabbing the walker.
And the detect_person branch contains the implementation for detecting a person.
UI Updates
The UI is now set up for pose training and for executing the individual tasks involved in retrieving the walker, and it is capable of fully controlling the robot.
Members: James & Markus
Base Navigation Update
Being able to navigate to the walker and retrieve it precisely is important. We observed that it is easier for Stretch to grab the walker if its base stays a few inches to the side of the walker and faces forward relative to it (see the second demo video).
Built a map of the capstone lab using the Stretch navigation stack
We then broke base navigation into two steps:
Step 1: The camera angle is set to point at the center of the Aruco marker. Stretch locates the walker using the marker and navigates closer to it. This ensures that Stretch can reach the optimal pose more precisely in step 2.
Step 2: Before step 2 executes, the camera angle is again set to point at the center of the Aruco marker; this time Stretch views the walker from a side angle. Stretch then navigates to the optimal pose for grabbing the walker.
This was implemented by taking advantage of Stretch's navigation stack. We saved poses for both steps in the walker's frame. When Stretch navigates, the backend transforms the saved pose into the map frame and publishes it to the topic the Stretch navigation stack listens on.
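The frame transform behind this step can be sketched in plain Python. In the real backend the walker-to-map transform comes from ROS tf and the result is published as a navigation goal; the function names and the 2D simplification below are illustrative, not our actual code:

```python
import numpy as np

def pose_to_matrix(x, y, yaw):
    """Convert a 2D pose (x, y, yaw) into a 3x3 homogeneous transform."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def transform_saved_pose(T_map_walker, saved_pose):
    """Express a goal pose saved in the walker's frame in the map frame.

    T_map_walker: transform of the walker frame relative to the map frame
                  (from tf in the real system).
    saved_pose:   (x, y, yaw) of the goal in the walker's frame.
    """
    x, y, yaw = saved_pose
    T_map_goal = T_map_walker @ pose_to_matrix(x, y, yaw)
    goal_x, goal_y = T_map_goal[0, 2], T_map_goal[1, 2]
    goal_yaw = np.arctan2(T_map_goal[1, 0], T_map_goal[0, 0])
    return goal_x, goal_y, goal_yaw
```

For example, a goal saved one meter in front of the walker stays one meter in front of it in the map frame wherever the walker is; the transformed pose is what gets sent to the navigation stack.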
Members: Markus & Sylvia
Walker Grasp Approach 1 (FUNMAP)
Saved a gripper pose which defines a 3D point in a 3D map for where to grab the walker
Wrote code to publish the 3D point to FUNMAP's navigate to point callback function
FUNMAP was unable to grab the walker precisely: it could only move the gripper to within a certain radius of the grasp point, and it did not constrain how the gripper reached the goal. FUNMAP also completely reset the position and orientation of the base, which defeated the purpose of the base navigation described above.
Thus, we tried the alternative approach described next.
Members: Markus & Sylvia & Francesca & James
Walker Grasp Approach 2 (Manual Adjustments)
Instead of using FUNMAP to move the gripper to the target pose, we used fine-tuned manual adjustments, which include:
rotating the robot's base left or right to change the yaw so that the robot directly faces the walker
extending or retracting the robot's arm to align the gripper's y value with the target pose's y value
raising or lowering the robot's lift to align the gripper's z value with the target pose's z value
moving the robot's base closer to or farther from the walker until the gripper's x value matches the target pose's x value and the gripper is close enough to grab the walker
Initially, the wrist yaw rotates so that the gripper faces the walker, and the gripper opens. After the final fine-tuning step (the last bullet above), the gripper closes to grab the walker.
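The adjustment sequence above can be sketched as computing one correction per degree of freedom, applied in order. This is a hypothetical sketch of the logic only; the actual motion commands are sent through Stretch's driver interface, and the pose representation here is assumed:

```python
def compute_adjustments(current, target):
    """Return the four corrections applied in order, given current and
    target gripper poses as dicts with keys x, y, z, yaw (robot frame).

    Order matches the steps above: rotate base, move arm, move lift,
    translate base. Illustrative only, not the actual implementation.
    """
    d_yaw = target["yaw"] - current["yaw"]   # rotate base to face the walker
    d_arm = target["y"] - current["y"]       # extend/retract arm
    d_lift = target["z"] - current["z"]      # raise/lower lift
    d_base = target["x"] - current["x"]      # drive base closer/farther
    return d_yaw, d_arm, d_lift, d_base
```

In the real system each delta would be sent as a separate motion command and the gripper pose re-measured before the next step, since each move changes the observed pose.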
Members: Markus & Sylvia
Detect Person (stretch goal)
Created a new node that pans the camera, looking for people's faces
Added a UI element that sends messages to initialize the node
The node calculates the transformed (x, y) position of the mouth in the map frame and the distance the robot needs to travel between the gripper's starting position and the target location.
The first video shows how the person detection node works and the types of messages it publishes.
The second video, shot in week 10, shows the robot moving to the detected person's location.
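Once the mouth position has been transformed into the map frame, the travel-distance computation reduces to a Euclidean distance. A minimal sketch, with an illustrative function name rather than the node's actual code:

```python
import math

def travel_distance(gripper_xy, mouth_xy):
    """Euclidean distance in the map frame between the gripper's
    starting (x, y) and the detected mouth's (x, y)."""
    dx = mouth_xy[0] - gripper_xy[0]
    dy = mouth_xy[1] - gripper_xy[1]
    return math.hypot(dx, dy)
```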
Members: Francesca
Walker Update
The walker setup has changed slightly:
The main Aruco marker is now in the center of the walker, above the steering point, to allow for better visibility when the robot is at extreme angles from the front of the walker.
The steering point now has its own Aruco marker for easier detection of its location.
Members: James
Project Proposal
Getting caught up on our project proposal!
Members: the whole team :)