Algorithm


Strategy

Below is a diagram of our finite state machine used to complete the resource collection task. 

This state machine was implemented in ROS 2 using YASMIN, which allows service and action calls within any given state. Accordingly, we created services for retrieving desired base poses and block locations, and action servers for moving the base, moving the arm with MoveIt, tilting the Locobot camera, opening and closing the gripper, and spinning the base. The main modules supporting these states fall into three groups: Grasping, Navigation, and Vision.
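The structure described above can be sketched as a minimal state machine in plain Python. This is not the YASMIN API; the state names, outcomes, and blackboard keys are hypothetical, and each state stands in for the service or action call the real system would make.

```python
# Minimal sketch of the state-machine pattern described above. Each state
# performs some work (in the real system, a service or action call) and
# returns an outcome string that drives the transition table. All names
# here are illustrative placeholders, not the actual YASMIN interface.

class State:
    def execute(self, blackboard: dict) -> str:
        raise NotImplementedError


class FindBlock(State):
    def execute(self, blackboard):
        # Real system: call the vision service for a block pose.
        blackboard["block_pose"] = (1.0, 0.5)
        return "found"


class NavigateToBlock(State):
    def execute(self, blackboard):
        # Real system: send a goal to the MoveBase action server.
        blackboard["at_block"] = True
        return "arrived"


class GraspBlock(State):
    def execute(self, blackboard):
        # Real system: command MoveIt and the gripper action server.
        return "grasped"


class StateMachine:
    def __init__(self, final_outcomes):
        self.final_outcomes = set(final_outcomes)
        self.states = {}  # name -> (state, {outcome: next state name})

    def add_state(self, name, state, transitions):
        self.states[name] = (state, transitions)

    def run(self, start, blackboard):
        name = start
        while name not in self.final_outcomes:
            state, transitions = self.states[name]
            outcome = state.execute(blackboard)
            name = transitions[outcome]
        return name


sm = StateMachine(final_outcomes=["done"])
sm.add_state("FIND_BLOCK", FindBlock(), {"found": "NAVIGATE"})
sm.add_state("NAVIGATE", NavigateToBlock(), {"arrived": "GRASP"})
sm.add_state("GRASP", GraspBlock(), {"grasped": "done"})
```

Running the machine walks FIND_BLOCK → NAVIGATE → GRASP and terminates at the "done" outcome, with intermediate results shared through the blackboard dictionary, mirroring how YASMIN states pass data.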


Grasping

The arm movement stack uses MoveIt to plan and execute an arm trajectory to the desired (x, y, z) position and (roll, pitch, yaw) orientation of the end-effector. For each received request, the action server node creates a Move Group Interface, converts the desired (roll, pitch, yaw) orientation into a quaternion, and sends the full target pose to the interface.
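The roll/pitch/yaw-to-quaternion conversion mentioned above can be sketched as follows. This is a standalone version of the standard ZYX (yaw-pitch-roll) formula; in the actual node this step would more likely use a ROS utility such as tf_transformations rather than hand-rolled math.

```python
import math

def rpy_to_quaternion(roll: float, pitch: float, yaw: float):
    """Convert roll/pitch/yaw (ZYX convention) to a quaternion (x, y, z, w)."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (
        sr * cp * cy - cr * sp * sy,  # x
        cr * sp * cy + sr * cp * sy,  # y
        cr * cp * sy - sr * sp * cy,  # z
        cr * cp * cy + sr * sp * sy,  # w
    )
```

For example, a pure 90-degree yaw maps to (0, 0, sin(pi/4), cos(pi/4)), which is the form the target pose message expects.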

Navigation

The navigation stack paired the MoveBase action server with a client to control robot movement, and used the AprilTag package to localize the robot. For localization, the AprilTag package returned the position of a detected AprilTag with respect to the camera's frame of reference.
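Given the tag's pose in the camera frame and the tag's known pose in the map, the robot's map pose follows from inverting and composing the two transforms. The planar sketch below illustrates that frame math with made-up poses; the real system works in 3D with quaternions (typically through TF), so this is a simplified illustration, not the actual localization code.

```python
import math

def invert(pose):
    """Invert a 2D pose (x, y, theta)."""
    x, y, th = pose
    c, s = math.cos(th), math.sin(th)
    return (-x * c - y * s, x * s - y * c, -th)

def compose(a, b):
    """Compose two 2D poses: apply b in the frame defined by a."""
    ax, ay, ath = a
    bx, by, bth = b
    c, s = math.cos(ath), math.sin(ath)
    return (ax + c * bx - s * by, ay + s * bx + c * by, ath + bth)

def robot_pose_in_map(tag_in_map, tag_in_robot):
    """Robot pose in map = (map -> tag) composed with (tag -> robot)."""
    return compose(tag_in_map, invert(tag_in_robot))

# Illustrative numbers: a tag at (2, 0) in the map with heading pi,
# observed 1 m directly ahead of the robot with the same heading.
pose = robot_pose_in_map((2.0, 0.0, math.pi), (1.0, 0.0, 0.0))
```

This yields a robot pose of roughly (3, 0) with heading pi: the robot faces the -x direction, so a tag 1 m ahead at x = 2 places the robot at x = 3.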

Vision

The vision stack functioned as a pipeline from the state machine to the vision client to the vision action server. The state machine would request a service from the vision client for a desired color, such as the character 'B' for blue; the client would ask the vision action server to detect the colors; and the client would then return the pose of the first detection of the desired color back to the state machine.
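The core of such a detection step is a color threshold followed by a centroid computation. The sketch below is a toy pure-Python version on a synthetic RGB image; the actual action server would operate on camera images (e.g. via OpenCV) and convert the pixel location into a pose, so the function name, image format, and thresholds here are all illustrative assumptions.

```python
def find_color_centroid(image, lower, upper):
    """Return the (row, col) centroid of pixels whose RGB values fall
    within [lower, upper] per channel, or None if no pixel matches.
    `image` is a nested list of (r, g, b) tuples -- a toy stand-in for
    a real camera frame."""
    rows, cols, n = 0, 0, 0
    for r, row in enumerate(image):
        for c, px in enumerate(row):
            if all(lo <= v <= hi for v, lo, hi in zip(px, lower, upper)):
                rows += r
                cols += c
                n += 1
    if n == 0:
        return None
    return (rows / n, cols / n)

# Tiny synthetic 3x3 image: one red pixel and one blue pixel at (1, 2).
img = [
    [(255, 0, 0), (0, 0, 0), (0, 0, 0)],
    [(0, 0, 0), (0, 0, 0), (0, 0, 255)],
    [(0, 0, 0), (0, 0, 0), (0, 0, 0)],
]
# Threshold for "blue": low red/green, high blue channel (made-up bounds).
centroid = find_color_centroid(img, lower=(0, 0, 200), upper=(80, 80, 255))
```

Here the blue threshold matches only the pixel at row 1, column 2, so the centroid lands exactly there; with many matching pixels the centroid gives the block's approximate image location.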