The purpose of this project is to use MoveIt2 to generate a basic Pick & Place task with a UR3e robotic arm. The task involves picking an object from the table and placing it in a different location. The Move Group Interface API is used to combine a series of motions into the pick & place sequence. Additionally, a perception node is integrated into the sequence so the robot can detect the position of the object to be picked.
MoveIt2.
Perception.
First, the MoveIt Setup Assistant was used to set up the robotic arm: joint limits, maximum velocities, and so on. Then a C++ node was created to control the arm through the MoveIt API.
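As an illustration, a minimal Move Group Interface node could look like the sketch below. The planning group name "ur_manipulator" and the "home" named target are assumptions; the actual names come from the Setup Assistant configuration of this project.

```cpp
#include <memory>
#include <rclcpp/rclcpp.hpp>
#include <moveit/move_group_interface/move_group_interface.h>

int main(int argc, char **argv)
{
  rclcpp::init(argc, argv);
  auto node = std::make_shared<rclcpp::Node>(
      "pick_and_place",
      rclcpp::NodeOptions().automatically_declare_parameters_from_overrides(true));

  // "ur_manipulator" is the usual group name for UR arms, but it is an
  // assumption here; the real name is set in the Setup Assistant.
  moveit::planning_interface::MoveGroupInterface move_group(node, "ur_manipulator");

  // Plan and execute a motion to a named target defined in the SRDF
  // (e.g. a "home" pose created with the Setup Assistant).
  move_group.setNamedTarget("home");

  moveit::planning_interface::MoveGroupInterface::Plan plan;
  if (move_group.plan(plan) == moveit::core::MoveItErrorCode::SUCCESS) {
    move_group.execute(plan);
  }

  rclcpp::shutdown();
  return 0;
}
```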
Joint limits configuration - joint_limits.yaml
First of all, the joint_limits.yaml file needs to be updated. As you can see, all the gripper joints have their max_velocity parameter set to 100. MoveIt rejects this value because it expects a float type, so it has to be changed to 100.0.
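For example, an affected gripper entry would change along these lines (the joint name below is illustrative; the actual names come from the generated joint_limits.yaml):

```yaml
joint_limits:
  gripper_finger_joint:      # illustrative joint name
    has_velocity_limits: true
    max_velocity: 100.0      # was 100 — MoveIt expects a float, not an int
    has_acceleration_limits: false
```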
Controller configuration - moveit_controllers.yaml
After running ros2 action list, we can see that the Action Server the simulated robot exposes is different from the one defined in our configuration. This means that when the MoveIt package tries to connect to the simulated robot to control it, it won't be able to do so, since the action names won't match.
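The fix is to make the action name built from moveit_controllers.yaml match the one reported by ros2 action list. A sketch of the relevant section is shown below; scaled_joint_trajectory_controller is the typical controller name for UR robots, but it is an assumption here and should be taken from the actual ros2 action list output:

```yaml
moveit_controller_manager: moveit_simple_controller_manager/MoveItSimpleControllerManager

moveit_simple_controller_manager:
  controller_names:
    - scaled_joint_trajectory_controller
  scaled_joint_trajectory_controller:
    type: FollowJointTrajectory
    # type + action_ns together must produce the name seen in ros2 action list:
    # /scaled_joint_trajectory_controller/follow_joint_trajectory
    action_ns: follow_joint_trajectory
    default: true
    joints:
      - shoulder_pan_joint
      - shoulder_lift_joint
      - elbow_joint
      - wrist_1_joint
      - wrist_2_joint
      - wrist_3_joint
```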
In manipulation, when you want to pick an object from the environment, you don't send the end-effector directly to the pose of the object, as the gripper might end up colliding with it. Instead, you send the end-effector to a pose near the object. Then you execute the approach motion to get close enough to the object to pick it. Once you have picked the object, you execute the retreat motion to go back to the previous position (this sequence is sketched in code after the figures below).
Full motion.
Approach position.
Retreat position.
Destination.
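A minimal sketch of the approach/retreat pattern shown above, using the Move Group Interface's Cartesian path API. The pre-grasp pose is passed in by the caller, and the 10 cm approach distance and thresholds are illustrative assumptions, not the project's actual values:

```cpp
#include <vector>
#include <moveit/move_group_interface/move_group_interface.h>
#include <moveit_msgs/msg/robot_trajectory.hpp>
#include <geometry_msgs/msg/pose.hpp>

// Sketch of the approach/retreat pattern around a grasp.
void approach_and_retreat(moveit::planning_interface::MoveGroupInterface &move_group,
                          const geometry_msgs::msg::Pose &pregrasp)
{
  // 1. Move to a pose near (above) the object, not to the object itself.
  move_group.setPoseTarget(pregrasp);
  move_group.move();

  // 2. Approach: straight-line descent towards the object.
  std::vector<geometry_msgs::msg::Pose> waypoints{pregrasp};
  waypoints.back().position.z -= 0.10;  // 10 cm down (assumed distance)

  moveit_msgs::msg::RobotTrajectory trajectory;
  const double eef_step = 0.01;        // interpolation resolution
  const double jump_threshold = 0.0;   // disable jump detection
  double fraction = move_group.computeCartesianPath(
      waypoints, eef_step, jump_threshold, trajectory);
  if (fraction > 0.99)
    move_group.execute(trajectory);

  // ... the gripper would close here ...

  // 3. Retreat: straight line back up to the pre-grasp pose.
  waypoints = {pregrasp};
  fraction = move_group.computeCartesianPath(
      waypoints, eef_step, jump_threshold, trajectory);
  if (fraction > 0.99)
    move_group.execute(trajectory);
}
```

Cartesian paths are used here instead of free-space planning so that the approach and retreat stay on a straight line, which keeps the gripper from sweeping sideways into the object.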
To detect the coordinates of the cube, a depth camera was used. With a depth camera it is possible to obtain a point cloud, which can then be processed with the PCL library to detect shapes.
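A rough sketch of such a perception node is shown below; the topic name and thresholds are assumptions, and the project's actual node may segment the cloud differently. The idea: subscribe to the camera's point cloud, remove the dominant plane (the table) with RANSAC, and take the centroid of the remaining points as the cube position.

```cpp
#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/point_cloud2.hpp>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/common/centroid.h>

void cloud_callback(const sensor_msgs::msg::PointCloud2::SharedPtr msg)
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::fromROSMsg(*msg, *cloud);

  // Segment the dominant plane (the table) with RANSAC.
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.01);  // 1 cm tolerance (assumed)
  seg.setInputCloud(cloud);

  pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
  pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);
  seg.segment(*inliers, *coefficients);

  // Keep only the points that are NOT part of the table.
  pcl::ExtractIndices<pcl::PointXYZ> extract;
  extract.setInputCloud(cloud);
  extract.setIndices(inliers);
  extract.setNegative(true);
  pcl::PointCloud<pcl::PointXYZ>::Ptr objects(new pcl::PointCloud<pcl::PointXYZ>);
  extract.filter(*objects);

  // Centroid of the remaining points ~ cube position in the camera frame;
  // it would then be transformed to the robot frame and published.
  Eigen::Vector4f centroid;
  pcl::compute3DCentroid(*objects, centroid);
}

int main(int argc, char **argv)
{
  rclcpp::init(argc, argv);
  auto node = rclcpp::Node::make_shared("cube_detector");
  // The topic name is an assumption; it depends on the camera driver.
  auto sub = node->create_subscription<sensor_msgs::msg::PointCloud2>(
      "/camera/depth/points", rclcpp::SensorDataQoS(), cloud_callback);
  rclcpp::spin(node);
  rclcpp::shutdown();
  return 0;
}
```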
Two tests were carried out. In the first, the position of the cube was hard-coded. In the second, the coordinates of the cube were obtained from the depth camera through the perception node.
Pick & Place.
Pick & Place + Perception.