In general, our solution met our design criteria. Our margin of error was closer to 3" than to our 2" target, but we met the requirement of throwing the ball beyond the reachable workspace, our safety speed requirements, and our runtime requirements. Our robot was able to play a game of Beer Pong in its current configuration, detecting multiple cups.
GRIPPER - We struggled with the robotic gripper. Its behavior was difficult to predict: we often encountered an issue where the gripper jams and then opens too slowly. This was not caused by the 3D-printed parts, since the issue persisted after removing them, and it remained even after the gripper's internal mechanism was lubricated. This was the major source of error in the entire project, contributing nearly all of the 2" of margin of error we had. We were unable to characterize the behavior well, although we could reliably reproduce the jamming by sending a "close-open-open" command sequence in rapid succession. Because our code never sends that sequence, we were unable to debug it further.
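As a first step toward the gripper characterization we list as our top improvement below, a short test script could replay the "close-open-open" sequence repeatedly and record how the gripper position evolves during each open. The following is only a sketch, assuming a Sawyer arm driven through the Intera SDK (intera_interface.Gripper with open(), close(), and get_position()); the node name and timing values are illustrative rather than taken from our code.

```python
#!/usr/bin/env python
"""Sketch of a gripper-jam characterization script.

Assumes a Sawyer arm controlled through the Intera SDK; the
intera_interface.Gripper calls (open, close, get_position) are an
assumption about the setup, not code from this project.
"""
import time

import rospy
import intera_interface

rospy.init_node("gripper_characterization")
gripper = intera_interface.Gripper()  # defaults to the right gripper


def open_and_sample(duration=2.0, dt=0.05):
    """Command the gripper open and sample its position for `duration` seconds."""
    gripper.open()
    samples = []
    start = time.time()
    while time.time() - start < duration:
        samples.append((time.time() - start, gripper.get_position()))
        rospy.sleep(dt)
    return samples


# Replay the problematic "close-open-open" sequence back to back. A jammed
# or slow second open shows up as a lower final position (or as a visibly
# slower rise when the sampled trace is plotted).
for trial in range(10):
    gripper.close()
    rospy.sleep(0.5)
    first = open_and_sample()
    second = open_and_sample()
    rospy.loginfo("trial %d: position after first open %.4f, after second open %.4f"
                  % (trial, first[-1][1], second[-1][1]))
```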
VISION - We found that Hough circle detection was satisfactory for our project once tuned, but we believe the consistency of cup detection could still be improved. A combination of the low resolution of the right_hand_camera, low contrast, and shadows cast by the lights produced a significant gradient across the image of a cup, as seen on the right. As a result, our circle detection algorithm would occasionally fail to detect the cups.
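For reference, the sketch below shows the kind of preprocessing and Hough-circle pipeline we have in mind. It uses OpenCV's cv2.HoughCircles; the CLAHE contrast step is one possible mitigation for the lighting gradient described above, and all parameter values are illustrative rather than our tuned values.

```python
import cv2
import numpy as np


def detect_cups(bgr_image):
    """Detect cup rims as circles; returns a list of (x, y, radius) in pixels.

    All parameter values below are illustrative, not our tuned values.
    """
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # Boost local contrast to reduce the lighting gradient across each cup.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    gray = clahe.apply(gray)
    # Smooth to suppress sensor noise before edge-based circle detection.
    gray = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(
        gray,
        cv2.HOUGH_GRADIENT,
        dp=1.5,       # inverse accumulator resolution
        minDist=40,   # minimum pixel distance between detected centers
        param1=100,   # upper Canny edge threshold
        param2=30,    # accumulator threshold; lower finds more (weaker) circles
        minRadius=15,
        maxRadius=60,
    )
    if circles is None:
        return []
    return [tuple(c) for c in np.round(circles[0]).astype(int)]
```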
Below is a list of improvements, in descending order of priority, with the rationale for each.
- Characterize gripper behavior and friction: allows for a significant consistency boost, improving precision.
- Install lights in the environment or on the arm: can improve vision consistency.
- Choose a better cup selection algorithm, such as the median of detected cup locations: improves accuracy (a sketch follows this list).
- Use an external color webcam: enables more robust vision algorithms.
- Use MoveIt to execute trajectories: allows for better tracking and potential improvements to the gripper release (also sketched after this list).
- Split the code into separate nodes: allows for easier development.
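To illustrate the cup selection item, a per-axis median over cup detections (for example, the same cup seen across several frames) is robust to an occasional spurious circle from the detector. Below is a minimal NumPy sketch; the coordinates in the example are made up.

```python
import numpy as np


def median_cup_location(detections):
    """Aggregate cup detections with a per-axis median.

    `detections` is an iterable of (x, y) centers, e.g. the same cup seen
    across several frames. Unlike taking the first detection, the median
    is robust to an occasional spurious or badly localized circle.
    """
    pts = np.asarray(list(detections), dtype=float)
    return np.median(pts, axis=0)


# Example: five noisy detections of one cup, including one outlier.
frames = [(311, 240), (309, 243), (310, 241), (455, 120), (308, 242)]
print(median_cup_location(frames))  # -> [310. 241.]
```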
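For the MoveIt item, a minimal moveit_commander sketch is shown below. The planning-group name "right_arm", the node name, and the target pose are assumptions and would need to match the robot's actual MoveIt configuration.

```python
#!/usr/bin/env python
"""Sketch of executing a pre-throw motion through MoveIt (moveit_commander).

The planning-group name "right_arm", the node name, and the target pose are
assumptions; they must match the robot's MoveIt configuration.
"""
import sys

import rospy
import moveit_commander
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("throw_setup")

group = moveit_commander.MoveGroupCommander("right_arm")
group.set_max_velocity_scaling_factor(0.5)      # keep within the safety speed limit
group.set_max_acceleration_scaling_factor(0.5)

# Move to an assumed wind-up pose before the throw; the values are illustrative.
target = Pose()
target.position.x, target.position.y, target.position.z = 0.6, 0.0, 0.4
target.orientation.w = 1.0
group.set_pose_target(target)
group.go(wait=True)       # plan and execute in one call
group.stop()              # clear any residual motion
group.clear_pose_targets()
```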