Detecting AR tags:
Use the ar_track_alvar package to detect the frame of each AR tag with respect to the camera frame
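Below is a minimal sketch of reading the detected tag poses, assuming the standard /ar_pose_marker topic published by ar_track_alvar's individualMarkers node; the tag_poses dictionary is our own illustrative bookkeeping:

```python
#!/usr/bin/env python
# Minimal sketch: read AR tag poses published by ar_track_alvar.
# The topic name depends on the individualMarkers launch configuration.
import rospy
from ar_track_alvar_msgs.msg import AlvarMarkers

tag_poses = {}  # tag id -> geometry_msgs/Pose, expressed in the camera frame


def marker_callback(msg):
    for marker in msg.markers:
        # Each marker's pose is reported relative to the camera frame.
        tag_poses[marker.id] = marker.pose.pose


rospy.init_node('ar_tag_listener')
rospy.Subscriber('/ar_pose_marker', AlvarMarkers, marker_callback)
rospy.spin()
```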
Color segmentation:
Subscribe to the image topic published by the webcam. In the callback function, convert the received image to a cv2 image and store it (see the sketch after this list)
Apply a Gaussian blur to reduce the Gaussian white noise produced by typical cameras
Convert the image to the HSV color space
Find the correct HSV color mask by sweeping ranges through hue, saturation, and value individually and recording the ranges that detect those colors. A reasonable color mask should produce a picture similar to the one on the upper left
Find the center of each color contour, as illustrated by the picture on the upper right
Find the location of each food ingredient represented by each color, and align those locations with the AR tag locations we calculated in the coordinate transform to ensure accuracy
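A minimal sketch of this pipeline is below. The topic name, HSV bounds, and minimum-area cutoff are illustrative placeholders, and the two-value findContours return assumes OpenCV 4:

```python
#!/usr/bin/env python
# Sketch of the color segmentation pipeline: blur -> HSV -> mask -> contours.
import rospy
import cv2
import numpy as np
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()


def image_callback(msg):
    # Convert the ROS image message to an OpenCV BGR image.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')

    # Gaussian blur to suppress camera noise, then convert to HSV.
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)

    # Example mask; the real (low, high) bounds come from the HSV sweep.
    mask = cv2.inRange(hsv, np.array([20, 100, 100]), np.array([30, 255, 255]))

    # Find contours and compute each centroid via image moments.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        m = cv2.moments(c)
        if m['m00'] > 500:  # ignore tiny noise blobs (area threshold)
            cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']
            rospy.loginfo('ingredient center at pixel (%.0f, %.0f)', cx, cy)


rospy.init_node('color_segmentation')
rospy.Subscriber('/usb_cam/image_raw', Image, image_callback)
rospy.spin()
```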
Two coordinate systems: the Sawyer system (frames a - b) and the camera system (frames c - f)
Combine the two coordinate systems using an AR tag attached to the Sawyer gripper, and manually calculate the transformation matrix between the two.
Transformation Matrices:
tf_echo() to get translation and quaternion
tf_trans() to convert the tf_echo() result to a 4x4 g matrix
Calculate the transformation between the camera and Sawyer base (forward kinematics):
g_ad = g_ab @ g_bf @ g_fd
coordiante_change() to transform camera-frame points in the preparation and ingredient areas to Sawyer-frame points (see the sketch after the frame list below)
p_a = g_ad @ p_d
Frames:
a: Sawyer base
b: Sawyer gripper
c: Preparation area (AR tag)
d: Camera base
e: Ingredient area (AR tags)
f: Sawyer gripper AR tag
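A minimal sketch of tf_trans() and the chaining above, assuming ROS's tf.transformations for the quaternion-to-matrix conversion; the translation and quaternion values are placeholders for what tf_echo() would report:

```python
import numpy as np
import tf.transformations as tft


def tf_trans(translation, quaternion):
    """Build a 4x4 homogeneous transform g from a translation and quaternion."""
    g = tft.quaternion_matrix(quaternion)  # 4x4 with the rotation block filled
    g[0:3, 3] = translation
    return g


# Placeholder tf_echo outputs for each link in the chain.
g_ab = tf_trans([0.5, 0.0, 0.3], [0.0, 0.0, 0.0, 1.0])   # a: base -> b: gripper
g_bf = tf_trans([0.0, 0.0, 0.05], [0.0, 0.0, 0.0, 1.0])  # b: gripper -> f: AR tag
g_fd = tf_trans([0.2, 0.1, 1.0], [0.0, 0.0, 0.0, 1.0])   # f: AR tag -> d: camera

# Chain the transforms: camera base expressed in the Sawyer base frame.
g_ad = g_ab @ g_bf @ g_fd

# coordiante_change(): map a camera-frame point into the Sawyer base frame.
p_d = np.array([0.1, 0.2, 0.8, 1.0])  # homogeneous point in frame d
p_a = g_ad @ p_d
```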
For motion planning, we decided to use the MoveIt! package. We recognized that not all of our motions would be planar, so we could not use manual joint angle control.
MoveIt! does, however, have issues with optimality and consistency when planning between two points. We had three main solutions, which worked often and generally preserved an orientation constraint without implementing one explicitly:
Precise box/environment constraints
We found that obstacle constraints generally increased the optimality of planned motions, so we created many precise obstacle constraints. These constraints are pictured to the left.
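A sketch of how such constraints might be added with moveit_commander's PlanningSceneInterface; the box names, poses, and dimensions here are illustrative, not our actual measurements:

```python
#!/usr/bin/env python
# Sketch: add precise box obstacles to the MoveIt! planning scene.
import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

rospy.init_node('add_environment_constraints')
scene = moveit_commander.PlanningSceneInterface()
rospy.sleep(1.0)  # give the scene interface time to connect


def add_box(name, xyz, size):
    pose = PoseStamped()
    pose.header.frame_id = 'base'  # Sawyer base frame
    pose.pose.position.x, pose.pose.position.y, pose.pose.position.z = xyz
    pose.pose.orientation.w = 1.0
    scene.add_box(name, pose, size=size)  # size = (x, y, z) extents in meters


# Tight boxes around the workspace; values are placeholders.
add_box('table', (0.7, 0.0, -0.1), (1.2, 1.2, 0.05))
add_box('camera_mount', (0.0, 0.6, 0.5), (0.1, 0.1, 1.0))
```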
Min-cost over iterations
Another approach that worked was planning multiple paths over multiple iterations (in our case, 20) and keeping the minimum-cost path. We chose the cost to be the number of points in the path. Ideally, the cost would be the path's length in Euclidean space, but computing that would have required heavy math (or at least a lot of forward kinematics). Since the computation time for this method is already high, we used the number of points in the path as the cost function. This has constant (although possibly high) computation time, but it can still produce inconsistent results. For our uses, this was sufficient.
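A sketch of the idea, assuming an initialized MoveGroupCommander with a target already set; note that plan() returns a (success, trajectory, time, error_code) tuple on recent MoveIt versions and a bare trajectory on older ones:

```python
def best_of_n_plans(move_group, n=20):
    """Plan n times and keep the plan with the fewest trajectory points."""
    best = None
    for _ in range(n):
        success, plan, _, _ = move_group.plan()
        if not success:
            continue
        # Point count stands in for path length as a cheap cost function.
        cost = len(plan.joint_trajectory.points)
        if best is None or cost < len(best.joint_trajectory.points):
            best = plan
    return best
```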
Thresholding
We used previous path-planning data to validate the feasibility of a given robot task plan. Specifically, we threshold the ratio of the Euclidean distance between the start and goal to the number of points on the planned path to decide whether a plan is reasonable. This helps identify potential issues with the plan, such as obstacles or other constraints the robot must navigate around; when a plan fails the check, the threshold can be adjusted or other methods used to ensure the plan is feasible and lets the robot complete the task.
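One plausible version of this check is sketched below, using joint-space distance so no forward kinematics is needed; the threshold value and the direction of the ratio are assumptions to be tuned:

```python
import numpy as np


def plan_is_reasonable(plan, threshold=0.05):
    """Reject plans that use many points for a short displacement."""
    points = plan.joint_trajectory.points
    start = np.array(points[0].positions)
    goal = np.array(points[-1].positions)
    distance = np.linalg.norm(goal - start)  # Euclidean distance, joint space
    # A low distance-per-point ratio suggests a wandering, inefficient path.
    return distance / len(points) >= threshold
```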
In conclusion, these approaches can be combined to generate a feasible and efficient path for the robot to follow, depending on the specific requirements of the task and the characteristics of the environment, optimizing the path to minimize the distance traveled or the time required to complete the task.