Overview
Our project consists of three main components:
1) Block Scanning: Scan the AR tags on the blocks to determine the position and orientation of each block relative to the robot arm.
2) Robot Control: Use the found position and orientation of each block as the target for the robot arm to move to.
3) Hardware: Blocks for the robot arm to pick and place, and AR tags for the camera to scan.
Block Scanning
A goal we established very early on was to have our camera accurately scan blocks to identify both their position and their orientation. Initially, we wanted to use blue 3D-printed blocks and rely on color sensing to find their positions. However, this was not sufficient for our project: the position we obtained was not accurate enough, we could not get the distance from a block to the arm using color alone, and we could not reliably identify a block's orientation. Detection got especially bad when blocks were placed close together, which was an unavoidable part of our project, so we scrapped the color idea entirely and switched to AR tags.
Using AR tags, we could scan the tag on each block and directly receive its position and orientation. We started by scanning the whole table at once, but this proved to be a problem, as explained further in the Implementation section.
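As an illustrative sketch of this interface (assuming ar_track_alvar's default ar_pose_marker topic; this is not our full scanning node), the tag poses can be read like so:

    #!/usr/bin/env python
    import rospy
    from ar_track_alvar_msgs.msg import AlvarMarkers

    def markers_callback(msg):
        # One marker per visible tag: m.id identifies the block, and
        # m.pose.pose holds its position and orientation (as a quaternion).
        for m in msg.markers:
            p = m.pose.pose.position
            rospy.loginfo("block tag %d at (%.3f, %.3f, %.3f)", m.id, p.x, p.y, p.z)

    if __name__ == '__main__':
        rospy.init_node('block_scanner')
        rospy.Subscriber('ar_pose_marker', AlvarMarkers, markers_callback)
        rospy.spin()

The frame these poses are expressed in is controlled by the tracker's output_frame parameter, so they can be reported relative to the arm directly or converted with a tf lookup.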
Robot Control
Robot Controller
We opted for a PID controller rather than relying purely on inverse kinematics or a strictly feed-forward controller. We expected this to give us better control over the robot and smoother movements between positions. However, we had to make some changes for our final design.
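For reference, the core of a PID position loop is sketched below; this is a minimal, illustrative version (the gains, time step, and per-joint usage here are placeholders rather than our tuned values):

    class PID(object):
        # Minimal single-joint PID controller (illustrative).
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = None

        def step(self, error):
            # error = desired joint angle - measured joint angle
            self.integral += error * self.dt
            deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * deriv

One such controller can run per joint, with the position error fed in and the output sent as a joint command on each control cycle.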
Main Controller Logic
Our initial control loop was very simple: pick a block, place it on top of the tower, and repeat until all blocks were placed. However, this did not work as well as intended. When every block happened to settle steadily as the tower was built, all worked well, but in the real world this rarely happened due to robot error, blocks slipping from the grip, friction, and so on. We therefore had to make changes for our final implementation to work.
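In sketch form, that initial loop amounted to the following (build_tower and its helpers are hypothetical stand-ins for our actual ROS calls, shown only to illustrate the flow):

    BLOCK_HEIGHT = 0.038  # meters, matching the 5 x 5 x 3.8 cm blocks

    def tower_top(height):
        # Hypothetical: a gripper pose above the tower base at the current stack height.
        return (0.6, 0.0, height + BLOCK_HEIGHT)

    def build_tower(block_ids, arm, scanner):
        # Naive loop: pick each block and stack it on the tower.
        height = 0.0
        for block_id in block_ids:
            pose = scanner.locate_block(block_id)  # AR-tag pose of the block
            arm.move_to(pose)
            arm.close_gripper()                    # pick
            arm.move_to(tower_top(height))
            arm.open_gripper()                     # place
            height += BLOCK_HEIGHT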
Hardware
Blocks & black cardboard paper
The wooden blocks were manufactured from scrap wood using a wood saw and a sanding machine. Different dimensions were tested, and eventually 5 x 5 x 3.8 cm blocks were selected as the optimal size for detecting and picking. We originally started with 3D-printed blocks; however, their material was rather slippery and not suitable for the Sawyer gripper, and this, along with the color-sensing issue mentioned above, ultimately pushed us to change to wooden blocks.
The AR tags were generated with the ROS ar_track_alvar package, which publishes real-time geometry messages (positions and orientations) for each tag it tracks. We tested different ways to attach the AR tags to the blocks and eventually placed tape between the tag and the block to ensure that no glare would affect the wrist camera's visibility.
Black cardboard paper was also used to enhance tag visibility for the camera by providing a larger contrast between the tags and the background.
Rethink Sawyer Arm
Sawyer right arm with 7 joints
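As a brief, illustrative sketch of commanding the arm (assuming the standard intera_interface Python API), the seven joints can be read and moved as follows:

    import rospy
    import intera_interface

    rospy.init_node('sawyer_demo')
    limb = intera_interface.Limb('right')

    angles = limb.joint_angles()   # dict over joints 'right_j0' ... 'right_j6'
    angles['right_j6'] += 0.1      # nudge the wrist joint slightly
    limb.move_to_joint_positions(angles)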
These are only brief descriptions of the Sawyer arm. For more information, a link to the Sawyer arm product page is included below: