The initial design of this project was optimized for simplicity and development efficiency. We envisioned four components, which could be worked on in parallel by the four members of our team:
homography-based computer vision, to get the colors and locations of all the pucks in the scene (i.e., the board state)
shot selection AI, to identify the best shot given the current board state
a shuffleboard simulator, to model friction and puck dynamics and for training the AI
a physical shuffleboard setup and shot actuation cycle, to execute AI-planned shots in the real world
The main trade-off in this design process was simplicity versus ideal effectiveness, a balance that appeared in many areas of our project. For example, initial tests showed that the most reliable and effective way of manipulating the pucks for a shot was a single-joint "kicking" motion. While a traditional hold-slide-release motion might be more realistic and allow finer-grained control, we chose the kicking motion because it was the most likely to be successful and predictable over the expected distribution of shot plans.
In each subpart, we had a set of tasks that could be either automated or done with human intervention. While we aimed to implement autonomy for as much of the game-playing process as we could, we understood that fallback options would be desirable to ensure viability given our time constraints. This design methodology helped us on multiple occasions. For example, we planned to have the robot automatically detect the board corners and perform the homography; when that failed during our presentation due to excessive noise, we quickly took over manual control to select the corners of the table. With more time and resources we could have developed a more robust computer vision solution able to reliably compute the homography on its own, but given the constraints we were working with, this redundancy allowed us to create a successful shuffleboard-playing robot with ample room for improvement.
Most of the hardware required for this project was readily available to us in the form of the Sawyer robot, which was chosen over the Baxter for its improved precision and maneuverability. While a custom robot could have been built to deliver highly tuned impulse vectors to the pucks, we decided to use the Sawyer since our team likely lacked the mechanical expertise to construct complex machinery within the allotted time. Conveniently, using a robot arm rather than a simple shooting mechanism also allowed us to apply more of what we learned in 106A.
We did, however, need to build a custom gripper (CAD available under Additional Materials) to pick up the pucks and move them from the side of the board to the shooting position. This custom gripper also gave us a flat face for contacting the puck during hitting motions. The gripper is 3D printed and designed to screw into the Sawyer gripper finger mount points (numbers 3 and 5) so that it can open and close to pick up the pucks. We designed the end effector so that only a single half-gripper CAD model, printed twice, was needed. We also added a thin layer of hot glue around the inside gripping surface to increase friction and facilitate grasping.
Another custom piece of hardware that we had to build was the shuffleboard, since buying a pre-built shuffleboard could easily run over $1000. Our shuffleboard was made from 3/4" plywood, which we had cut to 18.5 inches by 8 feet at the Jacobs Hall Makerspace. For the final tests, the board had score delimiters 1/4" wide with 4 inches of space in between them. We then spray-painted score markings on the board; this took a couple of attempts to get right, which is why you will see different board patterns in different images. We tried spray polyurethane to improve smoothness, but it ultimately made the surface rougher, so we scrapped that idea. We then applied shuffleboard powder wax to enable the pucks to slide.
While we had originally planned to use the MoveIt IK solver to transition the arm between key positions, we found that it had trouble performing these transitions reliably. Furthermore, it tended to leave the arm in awkward joint configurations from which it was nearly impossible to execute the precise motions that the controller needed for picking up the pucks and hitting them. As a result, we decided to store joint configurations for a sequence of key positions including imaging, calibration, home, puck 1, puck 2, puck 3, and puck 4, as well as a series of hitting positions that would get the arm close to the desired hitting position. When playing a game, the sequence of joint configurations is chosen to ensure the arm does not collide with the table, although there is no feedback or active collision avoidance. When the board is set up correctly, we found that moving between fixed joint configurations provides the most reliable gameplay. The joint positions are recorded quickly and efficiently by free-driving the arm while running src/move_arm/src/pose_saver.py to construct a YAML file of named joint configurations.
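For reference, a minimal sketch of how such a pose recorder could work, assuming the standard intera_interface API for the Sawyer; the output file name and pose names here are illustrative, not the exact contents of pose_saver.py:

```python
#!/usr/bin/env python
# Record the arm's current joint configuration under a name given on the
# command line and append it to a YAML file of named poses (sketch).
import sys
import yaml
import rospy
import intera_interface

def main():
    pose_name = sys.argv[1]              # e.g. "home", "puck_1", "calibration"
    rospy.init_node('pose_saver')
    limb = intera_interface.Limb('right')

    # Load any previously saved poses so repeated runs accumulate a full set.
    try:
        with open('poses.yaml') as f:
            poses = yaml.safe_load(f) or {}
    except IOError:
        poses = {}

    # Free-drive the arm into the desired configuration first, then run this.
    poses[pose_name] = dict(limb.joint_angles())

    with open('poses.yaml', 'w') as f:
        yaml.safe_dump(poses, f)

if __name__ == '__main__':
    main()
```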
In order to pick up the puck, drop it off in a desired location, and perform the hitting action that sends the puck down the board, we developed a separate joint velocity controller. The controller takes the current and desired end-effector positions and computes a straight-line trajectory that can be traced out by a series of joint velocity commands. These commands are computed by converting a desired end-effector velocity vector into joint velocities using the pseudo-inverse of the spatial Jacobian. We then apply proportional feedback to scale the velocity vector and reach the desired end-effector goal position. Since we reserved this controller for small adjustment motions, we avoid the singularity issues that would otherwise cause it to fail.
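A simplified sketch of one iteration of such a control loop, assuming a helper spatial_jacobian(q) that returns the positional Jacobian at the current joint angles; the gain and speed limit are illustrative:

```python
import numpy as np

def velocity_step(q, x_current, x_desired, spatial_jacobian, kp=1.0, v_max=0.1):
    """One iteration of a straight-line joint velocity controller (sketch).

    q: current joint angles; x_current/x_desired: end-effector positions (3,);
    spatial_jacobian: callable returning the 3xN positional Jacobian at q.
    """
    error = x_desired - x_current
    # Proportional feedback drives the end effector toward the goal, capped
    # so the straight-line motion stays slow and predictable.
    v = kp * error
    speed = np.linalg.norm(v)
    if speed > v_max:
        v = v * (v_max / speed)
    # Map the workspace velocity to joint velocities with the pseudo-inverse.
    J = spatial_jacobian(q)
    q_dot = np.linalg.pinv(J).dot(v)
    return q_dot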
Calibration Pose
Home Pose
Puck Retrieval Pose
Puck Hitting Pose
In order to strike the pucks, we use the last two joints of the robot. The wrist roll joint sets the direction of the puck's exit velocity, while the wrist flex joint swings the end effector toward the puck and applies the desired impulse. Combining the angle of the "hammer" induced by the wrist roll joint with the velocity generated by the wrist flex joint, we can easily control both the magnitude and 2D direction of the puck's initial velocity vector.
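As an idealized model (ignoring impact losses and compliance, so only a rough guide that still requires calibration), the puck's planar exit velocity can be approximated from the wrist roll angle, the wrist flex angular velocity at impact, and the lever arm from the flex joint to the contact face:

```python
import math

def puck_exit_velocity(roll_angle, flex_velocity, lever_arm):
    """Idealized planar exit velocity of the puck (illustrative model only).

    roll_angle:    wrist roll angle setting the shot direction (rad)
    flex_velocity: wrist flex angular velocity at impact (rad/s)
    lever_arm:     distance from the flex joint to the contact face (m)
    """
    speed = abs(flex_velocity) * lever_arm          # tip speed of the "hammer"
    return (speed * math.cos(roll_angle), speed * math.sin(roll_angle))
```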
The final version works as follows. Before starting the game, we move to the "calibration" position using src/move_arm/src/to_poses.py and align the corner of the board with the closed gripper. Then, each time the AI calls the function sb.perform_shot, the robot executes the following action sequence (a code sketch follows the list):
Move to the imaging position (arm in a position to not occlude the board). Take an image of the board.
Move to the home position.
Move to the puck pre-grasp position.
Using the custom controller, move downward 4 cm.
Grasp the puck.
Move upward with the custom controller, then back to the home position.
Move to a hitting position selected by the AI.
Place the puck.
Lift the arm and wind up the last two joints to the desired positions.
Bring the arm down and swing at the velocity requested by the AI.
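A high-level outline of this cycle is sketched below; the robot wrapper and its method names are hypothetical stand-ins for the fixed-pose moves, the custom Cartesian controller, and the gripper commands described above:

```python
def perform_shot(robot, puck_index, hit_pose_name, wrist_roll, swing_speed):
    """Outline of one shot cycle (sketch; `robot` and its methods are
    hypothetical wrappers around the routines described in the text)."""
    robot.move_to_named_pose('imaging')               # clear the camera's view
    image = robot.capture_image()                     # snapshot for the CV step
    robot.move_to_named_pose('home')
    robot.move_to_named_pose('puck_%d' % puck_index)  # pre-grasp above the puck
    robot.cartesian_move(dz=-0.04)                    # custom controller: 4 cm down
    robot.close_gripper()
    robot.cartesian_move(dz=0.04)                     # lift, then return home
    robot.move_to_named_pose('home')
    robot.move_to_named_pose(hit_pose_name)           # hitting position from the AI
    robot.open_gripper()                              # place the puck
    robot.wind_up(wrist_roll)                         # lift and set the last two joints
    robot.swing(swing_speed)                          # strike at the requested velocity
    return image
```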
The first step in our pipeline was a CV analysis of the physical shuffleboard. At the beginning of each turn the robot takes a snapshot of the board. Once we have an image of the current board state, processing begins by finding the shuffleboard corners, detected as the largest contour in the off-white color range of the board. We can then use the corner positions to compute a homography transform from the camera's perspective to a top-down perspective.
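A minimal sketch of the warp step, assuming the four board corners have already been detected and ordered around the board outline; the output resolution is chosen from the known board dimensions and the exact values here are illustrative:

```python
import cv2
import numpy as np

# Output size for the top-down view (8 ft x 18.5 in board at roughly 12.5 px/in).
BOARD_W_PX, BOARD_H_PX = 1200, 231

def warp_to_top_down(image, corners):
    """Warp the camera image to a top-down view of the board (sketch)."""
    src = np.array(corners, dtype=np.float32)          # detected corners, ordered
    dst = np.array([[0, 0], [BOARD_W_PX, 0],
                    [BOARD_W_PX, BOARD_H_PX], [0, BOARD_H_PX]], dtype=np.float32)
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, H, (BOARD_W_PX, BOARD_H_PX))
```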
With this warped image we can find precise puck locations by filtering for red and blue respectively and then finding the centroid of each detected object. Lastly, we convert these centroids from pixel coordinates to real-world distances from the bottom-right corner of the board, using the known dimensions of the board to figure out the pixels-per-meter ratio. Once we have the position of each puck, the coordinates are sent down the pipeline, where they instantiate the "digital twin" board that the AI uses to run simulations and choose a game-winning shot.
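A sketch of the puck localization step for one color, assuming experimentally tuned HSV bounds and a pixels-per-meter ratio derived from the board dimensions:

```python
import cv2

def find_pucks(top_down_bgr, lower_hsv, upper_hsv, px_per_meter):
    """Locate pucks of one color in the warped top-down image (sketch).

    lower_hsv/upper_hsv: tuned bounds for red or blue;
    px_per_meter: known board dimensions divided by warped image size."""
    h, w = top_down_bgr.shape[:2]
    hsv = cv2.cvtColor(top_down_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    # [-2] keeps this working across OpenCV 3 and 4 return conventions.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    pucks = []
    for c in contours:
        m = cv2.moments(c)
        if m['m00'] < 50:                            # skip small noise blobs
            continue
        cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']
        # Convert pixel coordinates to meters from the bottom-right corner.
        pucks.append(((w - cx) / px_per_meter, (h - cy) / px_per_meter))
    return pucks
```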
In order to develop the AI and test components of our software without hardware, we developed a simple Python simulator that models the sliding and collisions of the pucks. The simulator uses a discretization scheme that integrates the state vector derivative over small time steps. The derivative is computed as a function of the state, taking into account the frictional forces on each puck and checking whether any of the pucks are in collision. If there is a collision, the appropriate forces are computed such that a perfectly elastic collision takes place. We found this simple simulator to be sufficient for predicting puck motions and validating the planning software.
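A simplified sketch of one integration step of such a simulator; the friction coefficient, time step, and puck radius are illustrative constants, not our tuned values:

```python
import numpy as np

MU, G, DT, RADIUS = 0.2, 9.81, 0.001, 0.03   # illustrative friction, timestep, radius

def step(positions, velocities):
    """One integration step for N pucks; positions/velocities are (N, 2) arrays."""
    # Sliding friction decelerates each moving puck opposite its velocity.
    for i, v in enumerate(velocities):
        speed = np.linalg.norm(v)
        if speed > 1e-6:
            decel = min(MU * G * DT, speed)
            velocities[i] = v - decel * v / speed
    # Perfectly elastic collisions between equal-mass pucks: exchange the
    # velocity components along the line connecting their centers.
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            d = positions[j] - positions[i]
            dist = np.linalg.norm(d)
            if 0 < dist < 2 * RADIUS:
                n = d / dist
                vi_n, vj_n = velocities[i].dot(n), velocities[j].dot(n)
                if vi_n - vj_n > 0:                  # only if approaching
                    velocities[i] += (vj_n - vi_n) * n
                    velocities[j] += (vi_n - vj_n) * n
    positions += velocities * DT
    return positions, velocities
```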
In order to determine where to shoot the next puck, the AI considers the current positions of all pucks on the board and then simulates a grid of possible shots (~1400). The grid of simulated shots is determined beforehand to include only "good" shots, meaning shots that end close to score zones and not off the table. There are almost no scenarios where other shots would be optimal, so limiting the grid in this way minimized the time needed to calculate the best shot. After all shots have been simulated, the score of each simulated final environment is calculated along with heuristics intended to capture how good the shot may be in the future. These heuristics include the number of pucks for each team on the board and a secondary scoring method. As the game progresses, the relative importance of the heuristics and the score is adjusted to reflect the changing importance of future shots. Next, Gaussian smoothing is applied over the grid of shot "values" to account for imperfections in the robot's real execution compared to simulation, reducing the chance that the robot will take a very precise yet risky shot. Finally, the best available shot (or a random choice among ties) is sent to the robot for actuation.
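A sketch of the selection loop, with simulate() and evaluate() standing in for the simulator and the score-plus-heuristics evaluation described above (names and the smoothing width are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def choose_shot(board_state, candidate_shots, simulate, evaluate, sigma=1.0):
    """Pick a shot from a precomputed grid of candidates (sketch).

    candidate_shots: 2D grid of shot parameters that land near scoring zones;
    simulate() returns the resulting board; evaluate() combines the score
    with the progress-dependent heuristics."""
    rows, cols = len(candidate_shots), len(candidate_shots[0])
    values = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            final_board = simulate(board_state, candidate_shots[i][j])
            values[i, j] = evaluate(final_board)
    # Smooth the value grid so shots that only succeed with perfect execution
    # are discounted in favor of robust neighborhoods of good outcomes.
    smoothed = gaussian_filter(values, sigma=sigma)
    best = np.argwhere(smoothed == smoothed.max())
    i, j = best[np.random.randint(len(best))]        # random tie-break
    return candidate_shots[i][j]
```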