Design Requirements
Based on the goal stated on the previous page, we derived the following requirements:
Identify the location of each ingredient cup within the robot's dexterous workspace.
Pick up ingredient cups from arbitrary initial positions and pour their ingredients into the product cup.
Use a scale to determine precisely when to stop pouring each ingredient.
Computer Vision
When first designing the computer vision system, we attempted to use the YOLOv3 convolutional neural network for object detection. The idea was to locate each target object in a 2D RGB image and then map that detection onto the point cloud generated by the Intel RealSense camera to obtain the object's coordinates. We soon realized that a model robust enough to identify every object in our scene would require a custom labeled dataset covering our unique objects and the lab's lighting conditions, and the project's time constraints left no room to take hundreds of photos and tune a CNN. We next tried a simple color-thresholding algorithm in OpenCV to detect the different colored cups. While thresholding could tell us where the cups were, finding the centers of the cups proved incredibly difficult, especially as they changed angle relative to the camera. We finally settled on AR tags placed on the objects.
Using the size of an AR tag and the known resolution of the camera, we can determine the distance and orientation of a tag relative to the camera. We used four AR tags in total: one on the Sawyer robot's base, one on the "product cup", and one on each of the two ingredient cups. The tags on the ingredient and product cups let us compute the location of each cup relative to the camera frame. The catch is that solving the inverse kinematics problem requires coordinates in the robot's "base" frame. To bridge the two, the tag on the Sawyer's base lets us compute a transformation from camera-frame coordinates to base-frame coordinates. Since the robot places its base frame origin inside the arm itself, we must also apply a constant offset between the AR tag on the base and the true base frame origin.
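As a concrete illustration of this frame arithmetic, the sketch below (Python with numpy; the function names and calling convention are ours for illustration, not taken from our codebase) composes homogeneous transforms to map a cup tag's pose from the camera frame into the robot's base frame, including the constant base-tag offset:

    import numpy as np

    def invert(T):
        # Invert a 4x4 homogeneous transform using the rotation transpose,
        # which avoids a general matrix inverse.
        R, t = T[:3, :3], T[:3, 3]
        T_inv = np.eye(4)
        T_inv[:3, :3] = R.T
        T_inv[:3, 3] = -R.T @ t
        return T_inv

    def cup_in_base_frame(T_cam_basetag, T_cam_cuptag, T_basetag_base):
        # T_cam_basetag:  pose of the base AR tag in the camera frame
        # T_cam_cuptag:   pose of a cup's AR tag in the camera frame
        # T_basetag_base: constant offset from the base tag to the true base frame
        T_cam_base = T_cam_basetag @ T_basetag_base
        return invert(T_cam_base) @ T_cam_cuptag

The resulting base-frame pose is what gets handed to the inverse kinematics solver described below.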
Gripper Module
Our grippers were custom designed to compensate for the positioning tolerances of the Sawyer robot. The gripper pairs were 3D printed in PLA. The rounded end effector centers the cup during pickup, allowing a successful grasp within roughly an inch of positional error. A slot was used for flexibility in mounting.
Our initial iteration assumed a horizontal grasp approach. Looking into it, however, revealed that this method would demand far more obstacle avoidance: the arm would have to steer clear of the other cups, the scale, and anything else on the table, not only with its end effector but with the rest of the arm as well. Weighing this, we concluded that a vertical grasp would be simpler to implement.
Our final iteration implemented this vertical grasp approach. It also improved structural integrity through ribbing, slot support, and a bulkier overall design. This gripper allowed us to grab cups, move them, and pour them effectively.
Path Planning and Kinematics
To move the end effector where it was needed, we used the Sawyer's pre-configured MoveIt path planner. It uses inverse kinematics to determine the seven joint angles needed to drive the arm to a given pose. The drawback is that the path returned is arbitrary: it often fails to maintain orientation and includes excess motion. To work around this, we planned 50 candidate paths and chose the one with the fewest waypoints and the shortest execution time. The chosen path had minimal excess motion and maintained orientation, which was crucial to prevent spilling the cup.
This allowed us to pick and place cups as well as position them to prepare a pour.
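A sketch of that selection loop using the moveit_commander Python interface follows; the group name and the exact scoring are illustrative assumptions, and MoveIt's plan() return type varies by version, which the sketch accounts for:

    import sys
    import rospy
    import moveit_commander

    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node("path_selector", anonymous=True)
    group = moveit_commander.MoveGroupCommander("right_arm")  # Sawyer's arm group

    def best_of_n_plans(group, target_pose, n=50):
        # Plan n times and keep the trajectory with the fewest waypoints
        # and shortest duration, a proxy for excess motion.
        best, best_score = None, float("inf")
        group.set_pose_target(target_pose)
        for _ in range(n):
            plan = group.plan()
            # Newer MoveIt returns (success, trajectory, time, error_code);
            # older versions return the trajectory directly.
            traj = plan[1] if isinstance(plan, tuple) else plan
            points = traj.joint_trajectory.points
            if not points:
                continue  # this planning attempt failed
            score = len(points) + points[-1].time_from_start.to_sec()
            if score < best_score:
                best, best_score = traj, score
        return best

The winning trajectory is then executed with group.execute(best).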
To pour the ingredients into the product cup, we used forward kinematics to manipulate the j5 joint, which alters the wrist pitch. After calibrating the offset position from the product cup, the arm only has to tilt to drop ingredients in. The tilt was performed as a jittering motion that slowly increased in angle until the desired amount of ingredient had been poured.
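Roughly, the pouring loop looks like the sketch below, written against the intera_interface SDK (right_j5 is Sawyer's wrist-pitch joint; the step sizes and the done callback are illustrative, not our tuned values):

    import rospy
    import intera_interface

    rospy.init_node("pour_control")
    limb = intera_interface.Limb("right")

    def jitter_pour(limb, step=0.05, jitter=0.03, max_tilt=2.0, done=lambda: False):
        # Slowly increase the j5 pitch, adding a small back-and-forth
        # jitter at each step, until the scale reports the target weight.
        angles = limb.joint_angles()
        upright = angles["right_j5"]
        tilt = 0.0
        while tilt < max_tilt and not done():
            tilt += step
            for offset in (jitter, -jitter, 0.0):  # shake loose sticky ingredients
                angles["right_j5"] = upright + tilt + offset
                limb.move_to_joint_positions(angles)
        angles["right_j5"] = upright  # return to upright once done
        limb.move_to_joint_positions(angles)

The done callback is where the scale module, described next, signals that the target weight has been reached.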
Scale Module
A scale was used to measure the amount poured into the product cup. A strain gauge converts minute deflections into changes in resistance, which an Arduino processes and calibrates. The Arduino publishes the readings to our client, which issues the command to stop pouring once the desired weight is reached.
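On the client side, the stop logic reduces to a small ROS node like the sketch below; the topic names and target weight are assumptions for illustration, not our actual interface:

    import rospy
    from std_msgs.msg import Bool, Float32

    TARGET_GRAMS = 50.0  # example target weight for one ingredient

    rospy.init_node("scale_monitor")
    stop_pub = rospy.Publisher("/pour/stop", Bool, queue_size=1)

    def on_weight(msg):
        # The Arduino bridge publishes calibrated readings in grams.
        if msg.data >= TARGET_GRAMS:
            stop_pub.publish(Bool(data=True))

    rospy.Subscriber("/scale/weight", Float32, on_weight)
    rospy.spin()

The pouring loop's done callback would simply poll whatever flag this node sets.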
Design Evaluation
Design Robustness:
The use of the Sawyer allows reprogramming and improvement in future designs, as the code is written in easy-to-modify modules.
The 3D-printed grippers are cheap and quick to redesign should a new idea arise or a gripper break.
Our system struggles to handle ingredients of different viscosities without manually changing the scale threshold or jitter values.
When the cups are turned so that the camera can no longer see their AR tags, the system stops functioning. The only real solution is to replace the AR-tag computer vision system entirely.
Design Durability:
Durability is our design's biggest area for improvement: the positioning tolerance of the Sawyer end effector occasionally causes errors, which fractured a handful of printed grippers. Better-reinforced gripper designs reduced these failures.
Design Efficiency:
Our cup-moving path planner cycles through 50 candidate paths to choose the one with the least error-inducing excess motion. Unfortunately, this is computationally heavy compared to computing a single path.