The first step in this project was developing a collaborative strategy and state diagram for the robot's motion. We decided to start with a naïve approach in which the robot scans the entire scene on each iteration to determine the locations of the blocks; from the known final configuration of the blocks, the expected station numbers can then be estimated. Based on this estimate, the robot picks and places the closest block that can fulfill a station's requirements. The state diagram for this strategy is given to the right.
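As a rough sketch, the naïve loop can be expressed as a small state machine. The state names below are our own labels for the steps described above, not necessarily those used in the diagram:

```python
from enum import Enum, auto

class State(Enum):
    SCAN = auto()    # scan the scene and locate all blocks
    DRIVE = auto()   # drive the base toward the chosen block
    PICK = auto()    # pick up the block with the arm
    PLACE = auto()   # place the block at the estimated station
    DONE = auto()

def next_state(state, blocks_remaining):
    """Transition function for the naive scan-drive-pick-place loop."""
    if state is State.SCAN:
        return State.DRIVE if blocks_remaining else State.DONE
    if state is State.DRIVE:
        return State.PICK
    if state is State.PICK:
        return State.PLACE
    if state is State.PLACE:
        return State.SCAN    # rescan the whole scene after every placement
    return State.DONE
```

Rescanning after every placement is what makes this approach naïve: the robot never caches block locations between iterations.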
The architecture for our manipulation implementation is broken down into three main components: Perception, Base Motion Control, and Arm Motion Control. The perception program applies HSV color thresholding and combines the color and depth data to locate the desired block. The base motion controller uses a pose-stabilization controller that combines the lookahead tracking control from HW 1 with a final alignment step; it also subscribes to the motion-capture publisher to obtain the robot's absolute location in the field. Lastly, the arm motion controller uses commands from the Interbotix XS toolbox to drive the arm and gripper together to pick and place the selected block.
The perception code has two main functions: cube detection and cube location determination. Cubes are detected by masking the color data from the LoCoBot's RealSense camera with HSV thresholds for each block color. We then used the OpenCV findContours() function to detect each cube's blob and compute its center point. An example of the color masking is shown below. The second part of the perception code determines the distance to the block to be picked up. In the full collaborative strategy, this is where an algorithm would select the optimal block; for simplicity, we started by selecting the closest block in the field of view. To do this, we used the depth camera data to look up the depth at each cube center computed in the previous step. We then calculated the 3D vector to the nearest cube and performed a coordinate transformation from the robot frame to the point-cloud frame to obtain the Cartesian coordinates of the block. We published this location and placed a Marker for use in later parts. The cube detection and Marker placement can be seen below.
Initial color masking in OpenCV
Block detection in Moveit and marker placed on nearest block
The code for the perception implementation can be found here.
Once the desired cube location was determined, we used pose stabilization to drive to within a foot of the desired block. The pose controller uses nonholonomic (unicycle) kinematics with proportional controllers on the distance and heading errors to drive the robot to the target with minimal error; this code was based on Homework 1. After driving to the desired location, the robot also turns to a specified heading so that it faces the blocks. A demo of the drive can be found on the Results page.
The code for the base motion control can be found here.
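A simplified simulation in the style of the HW 1 pose-stabilization controller, using the standard (rho, alpha, beta) error coordinates for a unicycle base. The gains and time step here are illustrative, not the ones used on the robot:

```python
import math

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + math.pi) % (2 * math.pi) - math.pi

def pose_step(x, y, theta, gx, gy, gtheta,
              k_rho=0.8, k_alpha=2.0, k_beta=-0.5, dt=0.05):
    """One simulated step of a proportional pose-stabilization controller."""
    dx, dy = gx - x, gy - y
    rho = math.hypot(dx, dy)                   # distance error to the goal
    alpha = wrap(math.atan2(dy, dx) - theta)   # heading error toward the goal
    beta = wrap(gtheta - theta - alpha)        # final-orientation error
    v = k_rho * rho                            # proportional linear velocity
    w = k_alpha * alpha + k_beta * beta        # proportional angular velocity
    # Integrate the nonholonomic unicycle kinematics forward by dt.
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta = wrap(theta + w * dt)
    return x, y, theta

# Drive from the origin to (1.5, 1.0) with final heading 0.
x, y, th = 0.0, 0.0, 0.0
for _ in range(600):
    x, y, th = pose_step(x, y, th, 1.5, 1.0, 0.0)
```

The gains satisfy the usual stability conditions (k_rho > 0, k_beta < 0, k_alpha > k_rho), so the simulated pose converges to the goal.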
The final step of our implementation was picking up the desired block and placing it at the selected station. To move the robot arm and gripper, we used the Interbotix package's InterbotixLocobotXS interface. We first subscribed to the cube Marker and transformed its location into the arm frame. Then, using the set_ee_pose_components function, we moved the arm to the cube location; the gripper was opened and closed with gripper.open() and gripper.close(). We were able to successfully pick up the nearest block, but we did not get as far as determining a station location; instead we simply dropped the block farther away. A demo of the robot picking up a block can be found on the Results page.
The code for the arm motion control implementation can be found here.
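The Marker-to-arm-frame step above can be sketched with a homogeneous transform. The matrix below is a placeholder (10 cm forward, 40 cm up, identity rotation); on the robot, the real camera-to-arm transform comes from tf:

```python
import numpy as np

def transform_point(T, p):
    """Apply a 4x4 homogeneous transform T to a 3D point p."""
    ph = np.append(p, 1.0)
    return (T @ ph)[:3]

# Hypothetical camera-to-arm-base transform for illustration only:
# translate 10 cm forward and 40 cm up, with no rotation.
T_arm_cam = np.eye(4)
T_arm_cam[:3, 3] = [0.10, 0.0, 0.40]

cube_in_cam = np.array([0.05, 0.02, 0.60])   # Marker position in the camera frame
cube_in_arm = transform_point(T_arm_cam, cube_in_cam)
```

The resulting x, y, z would then be passed to set_ee_pose_components to command the end effector, followed by the gripper open/close calls described above.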
Code used in this project can be accessed using the link below:
Master launch file: roslaunch interbotix_xslocobot_control xslocobot_python.launch use_camera:=true use_base:=true robot_model:=locobot_wx250s align_depth:=true