Post #11

SavedPosesUI.mp4

Post-video update:

The web UI above displays the positions of the robot's body parts. Since each body part has limits on the range of values it can take, the UI lets the user set each part to a specific value within its valid range. A Save Pose option saves the robot's current pose (i.e., the joint positions currently displayed in the UI). Once Save Pose is clicked, a button for that pose is created. As shown above, Pose 1 and Pose 2 are two different poses that were previously saved by clicking the Save Pose button twice. Clicking a pose's button brings the robot back to that previously saved pose.
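Conceptually, a saved pose is just a named snapshot of joint positions that can be replayed later. The sketch below illustrates this idea; the joint names and the send_joint_command callback are hypothetical stand-ins for whatever interface the web UI actually uses to command the robot.

```python
# Minimal sketch: a saved pose is a named snapshot of joint positions.
# The joint names and the send_joint_command callback are hypothetical
# stand-ins for whatever interface the web UI uses to command the robot.

saved_poses = {}  # pose name -> {joint name: position}

def save_pose(name, current_positions):
    """Store a snapshot of the joint positions currently shown in the UI."""
    saved_poses[name] = dict(current_positions)

def go_to_pose(name, send_joint_command):
    """Replay a previously saved pose by commanding each joint."""
    for joint, position in saved_poses[name].items():
        send_joint_command(joint, position)

# Example: save "pose 1" and then go back to it.
save_pose("pose 1", {"lift": 0.38, "arm": 0.13, "gripper": 5})
go_to_pose("pose 1", send_joint_command=lambda j, p: print(f"set {j} -> {p}"))
```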

The video above is a demo of what we could do with the UI we created.

The task was to show that the robot can grab the center of the walker under two different configurations, using the UI we created to save a sequence of poses. Of the following two videos, one shows the demonstration process of making the robot grab the center of the walker, and the other shows the execution of that task. The location of the walker differs between the two configurations, but we added the constraint that the relative position between the walker and the robot stays the same. This is important because, once we add perception, the robot should be able to detect the ArUco marker on the "wing" located at the center of the walker and maintain an optimal relative position to the walker. We used the UI to find the best settings for the robot's poses to grab and pull the walker, given that fixed relative position between the robot and the walker.

demo_robot.mov

Demonstration Process:

Configuration 1

The sequence of poses we saved using the UI is Lift -> Arm Extend -> Gripper Open -> Gripper Close. Specifically, we needed to set the robot's Lift to roughly the same height as the "wing" attached to the center of the walker, extend the robot's arm to reach the "wing", open the gripper, and then close the gripper to grab the "wing". During the demonstration, we used the web UI to set the different parts of the robot, and once an optimal setting was found, the corresponding pose was saved as a button. We could always come back to that pose by clicking its button.

First, we set the Lift to 0.3m but noticed it was too low to grab the "wing". We changed the Lift to 0.35m, and then to 0.38m, which put the robot's arm at the same height as the "wing". The next step was to extend the arm. We tried 0.13m for the arm's extension, which brought the gripper in front of the "wing". For the gripper, a positive value opens it and a negative value closes it. We found that the best value for opening the gripper was 5, and for closing it was -2.

To summarize, here is the list of poses we tried; we then saved the optimal setting for each part (Lift/Arm/Gripper) of the robot. A sketch of how the final sequence could be replayed follows the list.

Set Lift 0.3m

Set Lift 0.35m

Set Lift 0.38m

Set Arm 0.13m

Set Gripper 5

Set Gripper -2
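The sketch below replays the final configuration 1 values listed above as one sequence. execute_step is a hypothetical placeholder for the command the UI sends to the robot; it is not a real robot API.

```python
# Hedged sketch of replaying the configuration 1 sequence with the final
# values above. execute_step is a hypothetical placeholder for the command
# the UI sends to the robot; it is not a real robot API.

GRAB_SEQUENCE = [
    ("lift", 0.38),    # meters: lift at the height of the "wing"
    ("arm", 0.13),     # meters: arm extended so the gripper is in front of the "wing"
    ("gripper", 5),    # positive value opens the gripper
    ("gripper", -2),   # negative value closes the gripper around the "wing"
]

def execute_step(joint, value):
    # Placeholder: in the real system this would send the command to the robot.
    print(f"commanding {joint} to {value}")

for joint, value in GRAB_SEQUENCE:
    execute_step(joint, value)
```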


Configuration 2

The demonstration process for configuration 2 was very similar to that for configuration 1. We noticed that the commands we sent to the robot didn't always execute perfectly. For example, if we told the robot to extend its arm to 0.5m relative to the origin, it would usually move to a position within some tolerance of 0.5m (e.g., 0.4899m). As a result, replaying the saved sequence of poses from configuration 1 came close to grabbing the center, but with small errors. This led to a slight difference between the sequence of poses for this configuration and the sequence for configuration 1. We added a setting of -0.15 for the robot's wrist so it could rotate the gripper closer to the "wing" located at the center of the walker. The rest of the settings remained roughly the same.
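One way to account for this settling error is to check each joint against a tolerance after commanding it. The sketch below illustrates the idea; get_joint_position and the 0.02m tolerance are assumptions, not part of our current system.

```python
# Hedged sketch: check whether a commanded joint settled within a tolerance
# of its target, since commands often land slightly off (e.g., 0.4899m for a
# 0.5m arm command). get_joint_position is a hypothetical stand-in for
# however the robot reports its current state.

TOLERANCE_M = 0.02  # assumed acceptable error in meters

def reached_target(get_joint_position, joint, target, tol=TOLERANCE_M):
    """Return True if the joint settled within tol of the commanded target."""
    return abs(get_joint_position(joint) - target) <= tol

# Example: a 0.5m arm command that settled at 0.4899m still counts as reached.
print(reached_target(lambda joint: 0.4899, "arm", 0.5))  # True
```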

robot_execution.mov

Execution:

The video above shows that in both configurations, regardless of where the walker is, the robot can lower its arm, extend it, open its gripper, and grab the "wing" located at the center of the walker.

Next Step:

We named the ArUco marker on the "wing" located at the center of the walker "walker_center". As seen above, we added the option for the user to select a coordinate frame from the web UI. The coordinate frames include the robot's start pose relative to the world and the walker_center frame. This is important for our project because knowing the TFs in different coordinate frames helps keep track of the relative position between the walker and the robot. For example, the relative position between the walker and the robot shown in the videos above can be expressed as a TF that represents where the robot base is relative to walker_center, in the walker_center frame. Our next step is to incorporate this TF into the walker detection as well as the navigation stack to bring the robot to an optimal position for grabbing the walker.
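For illustration, such a TF lookup could look like the sketch below, assuming a ROS 1 / tf2 setup. The frame name "base_link" for the robot base is an assumption; "walker_center" follows the naming above.

```python
# Hedged sketch (assuming a ROS 1 / tf2 setup): look up where the robot base
# is relative to the walker_center frame. The frame name "base_link" for the
# robot base is an assumption; "walker_center" follows the naming above.

import rospy
import tf2_ros

rospy.init_node("walker_center_tf_lookup")
tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)

rospy.sleep(1.0)  # give the listener a moment to fill the buffer

# Pose of the robot base expressed in the walker_center frame.
transform = tf_buffer.lookup_transform(
    "walker_center",  # target frame
    "base_link",      # source frame (assumed name for the robot base)
    rospy.Time(0),    # latest available transform
)
print(transform.transform.translation, transform.transform.rotation)
```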