Despite many challenges and long nights along the way, our project was largely successful in the end. We played a half game of shuffleboard against the robot during our showcase, cut short only by the time allotted for our presentation. The robot went through every step of the pipeline: computing a homography of the board, detecting the pucks, calculating a desired shot trajectory, and actuating the shot. The video below shows a full game recorded after the showcase.
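For a sense of what the puck detection step involves: once the board has been warped into a top-down view, the pucks are just circles of a known size. The sketch below shows one way this could be done with OpenCV's Hough circle transform; it is an illustration of the idea, not our actual detection code, and every parameter and file name is a made-up placeholder.

```python
import cv2
import numpy as np

# Sketch of circle-based puck detection on a rectified board image.
# All parameters are illustrative placeholders, not tuned values.
top_down = cv2.imread("board_top_down.png")  # hypothetical rectified image
gray = cv2.medianBlur(cv2.cvtColor(top_down, cv2.COLOR_BGR2GRAY), 5)

circles = cv2.HoughCircles(
    gray, cv2.HOUGH_GRADIENT,
    dp=1.2,        # inverse accumulator resolution
    minDist=30,    # minimum spacing between puck centers (px)
    param1=100,    # Canny high threshold
    param2=30,     # accumulator threshold: lower = more detections
    minRadius=10, maxRadius=40,
)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(top_down, (x, y), r, (0, 255, 0), 2)
```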
From an outside perspective, all of the parts worked; however, there were still a few problems. First, the automatic board detection stopped working minutes before our presentation, likely because we had to move the camera and suddenly had to contend with the whiteboard in the background. As a result, we had to select the board corners manually during the showcase.
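The manual fallback is conceptually simple: click the four board corners and warp the image into a top-down view. A minimal sketch of the idea (the window name, click ordering, and output dimensions are all illustrative, not our actual code):

```python
import cv2
import numpy as np

# Hypothetical sketch of the manual-corner fallback: the user clicks the
# four board corners, and we warp the image into a top-down view.
clicked = []

def on_click(event, x, y, flags, param):
    # Record each left-click as a corner (order: TL, TR, BR, BL).
    if event == cv2.EVENT_LBUTTONDOWN and len(clicked) < 4:
        clicked.append((x, y))

frame = cv2.imread("board.png")  # placeholder image path
cv2.namedWindow("select corners")
cv2.setMouseCallback("select corners", on_click)
while len(clicked) < 4:
    cv2.imshow("select corners", frame)
    cv2.waitKey(30)

src = np.float32(clicked)
# Map the board to a 400x1200 px top-down image (illustrative dimensions).
dst = np.float32([[0, 0], [400, 0], [400, 1200], [0, 1200]])
H = cv2.getPerspectiveTransform(src, dst)
top_down = cv2.warpPerspective(frame, H, (400, 1200))
```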
We also ran into the Sawyer's joint velocity limits: our swing trajectory strategy could not hit the puck as hard as we needed, and our best shots only made it roughly to the zone marked 1 on the board. Because of the same limits, it is also unclear how accurate our shot-angle control scheme is. Still, we are happy with the overall success of the project, given that the robot played a full game of shuffleboard with no unplanned intervention or hiccups.
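For intuition on why the velocity limits bite: the end-effector velocity is the Jacobian times the joint velocities, so per-joint speed caps directly bound the achievable Cartesian speed along the shot direction. A back-of-the-envelope sketch (every number here is made up for illustration; these are not Sawyer's actual limits or Jacobian):

```python
import numpy as np

# v = J(q) @ qdot: end-effector speed along the shot direction is a
# weighted sum of joint speeds. With each joint capped at qdot_max, the
# best case is every joint running at its limit with the right sign.
J_row = np.array([0.9, 0.5, 0.2])     # hypothetical Jacobian row (m/rad), shot direction
qdot_max = np.array([1.5, 1.5, 3.0])  # hypothetical joint velocity limits (rad/s)

v_max = np.abs(J_row) @ qdot_max      # upper bound on speed along that direction
print(f"best-case end-effector speed: {v_max:.2f} m/s")
```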
Our finished product was most of the way to meeting our design criteria. We completed all the subparts laid out in our design plan, and we successfully played a full four-puck game of shuffleboard against the robot. However, while the robot could play, it could not reliably beat a (mildly competent?) human opponent as we had hoped, nor was it as automated as we would have liked.
Because of the project's large scope and the time crunch we ran into at the end, we were ultimately forced to hack certain parts together.
Firstly, it took some effort to get the custom end effector functioning. When we first 3D printed it, the screw holes came out about half the size they needed to be. Pressed for time and not wanting to risk mismeasuring and printing again, we spent about 30 minutes widening the holes with a pocket knife. In addition, because of the one-and-done design process for this end effector, the way it sits on the Sawyer gripper mount and how well it opens and closes is highly dependent on the tightness of each of the four screws. Given an extra week, we would probably reprint the gripper with slight modifications.
Staying on the hardware side, we consider the paint job on the shuffleboard a flaw we would redo given additional time. It would also be nice to apply a paint-on varnish or resin to smooth the board further, though that would require extra money.
Because of time constraints, the computer vision section ended up relying on manual selection of the board corners, which was functional but felt somewhat hacked together. Furthermore, even when the automatic board detection was working, the way we got the system to ignore the whiteboard on the wall was to tape sweatshirts over it so it read as less of a big, light-colored quadrilateral. With additional time, we would optimize and fully automate the board detection system, perhaps along the lines sketched below.
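A more robust detector might reject the whiteboard geometrically rather than with sweatshirts. As a sketch of the direction we would explore (the thresholds and the aspect-ratio heuristic are hypothetical, not something we implemented or tuned):

```python
import cv2

def find_board_corners(frame, min_area=50_000, min_aspect=3.0):
    """Illustrative quadrilateral detector; thresholds are hypothetical.

    The shuffleboard is long and narrow while a whiteboard is closer to
    square, so an aspect-ratio check might disambiguate the two.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        if cv2.contourArea(c) < min_area:
            break  # contours are sorted, so the rest are smaller still
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) != 4:
            continue  # not a quadrilateral
        _, (w, h), _ = cv2.minAreaRect(approx)
        aspect = max(w, h) / max(min(w, h), 1e-6)
        if aspect < min_aspect:
            continue  # too square: likely the whiteboard, not the board
        return approx.reshape(4, 2)
    return None
```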
The actuation part is perhaps the hackiest section of the project. Given extra time, we would have avoided the blind motions to preset joint angles that make up almost every part of the action sequence. Perhaps worse, under the hood the setting of joint angles uses os.system("rosrun intera_examples go_to_joint_angles.py") because we didn't have time to glean the important lines out of the example file. The hit motion is similarly hard-coded, and it would have been nice to optimize it a little more, using other joints to achieve faster end-effector velocities.
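The cleaner replacement would probably be to call the Intera SDK's Limb interface directly instead of shelling out to the example script. A minimal sketch of that idea (the node name and joint angles below are placeholders, not our actual presets):

```python
import rospy
import intera_interface

# Move to a preset pose via the Intera SDK instead of os.system().
rospy.init_node("shufflebot_actuation")  # placeholder node name
limb = intera_interface.Limb("right")

# Placeholder joint angles (rad), not our real shot-setup pose.
ready_pose = {
    "right_j0": 0.0, "right_j1": -0.9, "right_j2": 0.0,
    "right_j3": 1.8, "right_j4": 0.0, "right_j5": 0.6, "right_j6": 0.0,
}
limb.move_to_joint_positions(ready_pose, timeout=10.0)
```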
Ultimately, however, the hacks did their job and the ShuffleBot worked better than we could have expected. Many thanks to Alan the Sawyer robot for coming in clutch.