Our main project goals are for the Sawyer robot to play full matches against human players, solve puzzles starting from arbitrary game states, grasp pieces correctly based on their size, move at a speed tuned for smooth actuation, and drive a GUI that displays the robot's decisions and the current game state on a computer screen.
The first key step toward the Sawyer robot deriving a move is being able to see the board itself, which is placed on the table directly in front of and below the robot. In lab, we achieved this with a custom Sawyer tuck, so that the arm camera faces straight down at the table. However, the arm camera has several critical flaws that prevented us from simply using the custom tuck for sensing:
Binary Image - The Sawyer robot's arm camera only supports two colors: black and white. Darker chess pieces therefore appear black on screen, while lighter ones appear white. And what do these pieces stand on? Black and white tiles! Detecting white pieces on white tiles and black pieces on black tiles would be extremely difficult, and even if we managed it, there would be too many edge cases to derive a reliable board state.
Camera Distortion - The Sawyer robot's arm camera presents a fish-eye view of the table. This is detrimental to our model, since a chess-playing robot must determine which tile each piece occupies. Using the arm camera directly would require a complex and computationally expensive method to map every tile to the piece on it. Undistorting the image with cv2 would sacrifice a large portion of the image space, cutting parts of the board out of the camera output.
Piece Type - Identifying the different piece types was critical to our end goal of completing chess puzzles, not just full games of chess. Some pieces look very similar, such as the king, queen, and rook, or the bishop and pawn. Correctly and reliably identifying which piece occupies which square, without false matches and without missing any piece, was therefore a major challenge, especially since slight differences in lighting, angle, or board position can severely degrade detection reliability without mitigation.
After determining where every piece sits on the board, the robot must choose a move. A problem that surfaced here was integrating a chess engine with the robot's tech stack; merely importing the Stockfish library pulled in numerous dependencies that collided with ROS components, which we had to resolve.
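Once the dependency conflicts are resolved, querying an engine for a move is short. The sketch below uses the python-chess library's UCI interface rather than our exact integration code, and it assumes a `stockfish` binary is discoverable on the PATH; the search for the binary and the 0.1-second time limit are illustrative choices, not our project's settings.

```python
import shutil

import chess
import chess.engine

board = chess.Board()  # standard starting position; a puzzle would load a FEN here

# Assumption: the Stockfish binary is installed and on the PATH.
stockfish_path = shutil.which("stockfish")
if stockfish_path:
    engine = chess.engine.SimpleEngine.popen_uci(stockfish_path)
    # Ask the engine for its best move within a small time budget.
    result = engine.play(board, chess.engine.Limit(time=0.1))
    board.push(result.move)  # apply the engine's move to our board state
    engine.quit()
```

A puzzle from a random game state would be loaded via `chess.Board(fen_string)` before the engine query, which is what lets the same code serve both full matches and puzzle solving.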
Finally, the Sawyer robot has to physically pick up a chess piece and relocate it from one tile to another. Because our chess pieces are only 2.5 cm in diameter and 1 cm in height, the robot must grasp and move them with extreme precision, and it must avoid colliding with any other piece along its trajectory.
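The geometry behind such a pick-and-place move can be sketched as follows. The board origin, tile size, and clearance height below are illustrative assumptions, not measured values from our setup; the piece height matches the 1 cm figure above. The simple collision-avoidance strategy shown, lifting above the tallest piece before traversing, is one common approach, not necessarily the trajectory planner we used.

```python
# Hypothetical board geometry: origin at the center of the a1 tile,
# expressed in the robot base frame. Values are assumptions, not measurements.
SQUARE_SIZE = 0.05            # 5 cm tiles (assumed)
BOARD_ORIGIN = (0.60, -0.175)  # (x, y) of a1's center (assumed)
PIECE_HEIGHT = 0.01            # 1 cm pieces, per the text
CLEARANCE = 0.05               # travel-height margin above the pieces (assumed)

def square_to_xy(square: str) -> tuple[float, float]:
    """Map an algebraic square like 'e4' to (x, y) in the robot base frame."""
    file = ord(square[0]) - ord("a")  # 0..7 across files a-h
    rank = int(square[1]) - 1         # 0..7 across ranks 1-8
    return (BOARD_ORIGIN[0] + rank * SQUARE_SIZE,
            BOARD_ORIGIN[1] + file * SQUARE_SIZE)

def pick_and_place_waypoints(src: str, dst: str) -> list[tuple[float, float, float]]:
    """Cartesian waypoints that lift clear of every piece while traversing."""
    sx, sy = square_to_xy(src)
    dx, dy = square_to_xy(dst)
    z_grasp = PIECE_HEIGHT / 2          # grip around the piece's midpoint
    z_travel = PIECE_HEIGHT + CLEARANCE  # above any piece on the board
    return [(sx, sy, z_travel), (sx, sy, z_grasp),   # descend and grasp
            (sx, sy, z_travel), (dx, dy, z_travel),  # lift, traverse clear of pieces
            (dx, dy, z_grasp)]                        # lower and release
```

Keeping the traversal height above every piece sidesteps per-piece collision checking at the cost of longer vertical travel, a reasonable trade given the 2.5 cm grasp tolerance.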
Chess robots are an exciting way of visualizing a contest that rose to prominence in the 20th century: human intelligence versus artificial intelligence. Chess engines are among the earliest embodiments of man-made intelligence, and matches between grandmasters and computationally powerful algorithms have repeatedly made headlines. Instead of sitting chess masters in front of computer screens clicking on a virtual board, seating them across from a robot arm in a physical, face-to-face match heightens the tension and improves the storytelling for the media.
Please refer to the Results/Videos section.