Robot Controller Implementation
Initially, we used only the standard P, I, and D terms for our PID controller. However, the resulting arm movements were very slow, and retuning the PID gains alone was not enough. We therefore added a feedforward term to the controller, so our final implementation was a feedforward-plus-PID (FPID) controller, which helped a lot with both speed and accuracy. In the end, with the feedforward term in place and some further gain tuning, we were able to make our robot accurately move to desired positions.
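The idea behind the FPID controller can be sketched in a few lines. This is a minimal illustration, not the project's actual controller; the class name, gains, and the choice of target velocity as the feedforward input are our own assumptions:

```python
class FeedforwardPID:
    """PID controller with an additive feedforward term (illustrative sketch)."""

    def __init__(self, kp, ki, kd, kf):
        self.kp, self.ki, self.kd, self.kf = kp, ki, kd, kf
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured, target_velocity, dt):
        error = setpoint - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # The feedforward term commands effort for the desired motion up front,
        # instead of waiting for tracking error to build up -- this is what
        # speeds up the response compared to pure PID.
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative + self.kf * target_velocity)
```

The PID terms then only have to correct the residual error, which is why the arm can track targets faster without aggressive (and oscillation-prone) PID gains.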
Scanning Implementation
Our initial idea was to scan the entire table from a high vantage point. This did not work: from that distance, the camera on the robot arm could not reliably detect and identify the AR tags. Moving the arm closer allowed the blocks to be scanned, but then only a small portion of the table was visible, so we had to change our approach.
Block Scanning Setup
Prior to scanning our blocks, we had to find positions for the Sawyer arm that maximized the effectiveness of the wrist camera for the AR tag scanning procedure. To this end we:
Zero-g'd the Sawyer arm to allow free movement and moved it to a desired position above the table
Once we agreed on the position, recorded it using rostopic in our local terminal
Created a custom launch file for each of the three scanning positions
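To illustrate the last two steps, the pose read off rostopic can be embedded into a small launch file. This is only a sketch of the idea; the parameter naming and file layout are our own assumptions, not the project's actual launch files:

```python
def make_scan_launch(position_name, joint_angles):
    """Generate a minimal ROS launch file storing one scanning pose as
    parameters. The scan/<position>/<joint> naming is hypothetical."""
    params = "\n".join(
        '  <param name="scan/{}/{}" value="{}" />'.format(position_name, joint, angle)
        for joint, angle in sorted(joint_angles.items())
    )
    return "<launch>\n{}\n</launch>\n".format(params)

# Example: joint angles (radians) as read from `rostopic echo /robot/joint_states`
launch_xml = make_scan_launch("left", {"right_j0": -0.45, "right_j1": 0.31})
```

Baking the poses into launch files means the scanning positions are fixed and repeatable across runs, rather than re-taught each session.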
Block Scanning Process
Afterwards, we embedded these custom scanning positions into our code, and our process, while simple, was as follows:
Move the Sawyer arm through the left, middle, and right scanning positions, all the while watching for any AR tags in view
For every tag the wrist camera captured, use the lookup_tag() function to extract the tag's position and orientation, and save this information, along with the AR tag identifier, to a global list of "seen" AR tags
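The bookkeeping in the steps above can be sketched as follows. The `move_to`, `visible_tags`, and `lookup_tag` callables stand in for the project's actual robot-motion and AR-tracking calls, and keeping the first sighting of each tag is our own assumption; only the scan-and-record logic is shown:

```python
SCAN_POSES = ["left", "middle", "right"]  # the three launch-file positions

def scan_table(move_to, visible_tags, lookup_tag):
    """Visit each scanning pose and record every AR tag the camera sees.

    Returns a dict mapping tag id -> (position, orientation); a tag seen
    from multiple poses is recorded once, at its first sighting.
    """
    seen = {}
    for pose in SCAN_POSES:
        move_to(pose)                     # drive the arm to the scan pose
        for tag_id in visible_tags():     # tags currently in the camera view
            if tag_id not in seen:
                seen[tag_id] = lookup_tag(tag_id)
    return seen
```

Deduplicating by tag id keeps the global list consistent even though the middle view overlaps with the left and right views.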
Main Logic Implementation
As mentioned in the Design section, our initial idea for the control logic was too simple and did not work properly due to real-world errors. Oftentimes, a block would be placed on the tower in such a way that any blocks stacked on top of it would topple the structure. This was caused by small errors in both the pick-up and placing phases: if a block was picked up at an odd angle, or the grippers caused it to shift as it was set down, it could end up far off center from the desired position. We tried to make our pick-and-place as accurate as possible, but these small errors would accumulate and topple the tower before it could gain any substantial height.
For our final implementation, we wanted to make sure the tower remained stable as it was built. To do this, we changed our control loop to re-scan the tower whenever a new block was placed. We could then determine, by comparing the new block's actual position against its desired position, whether the placement was stable, and re-stack any block that would otherwise leave the structure unstable. Illustrated is the final control loop for our program, with each step labeled. The code itself can be viewed here: https://github.com/AbhiAlderman/106a_Project
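The stability check at the heart of this loop can be sketched as a simple distance test. The tolerance value and function name here are hypothetical, not the project's actual numbers:

```python
def is_stable(placed_xy, desired_xy, tol=0.01):
    """Return True if a placed block lies within `tol` meters of its target.

    tol is an illustrative threshold; a block failing this check would be
    picked back up and re-placed before the next block goes on the tower.
    """
    dx = placed_xy[0] - desired_xy[0]
    dy = placed_xy[1] - desired_xy[1]
    return (dx * dx + dy * dy) ** 0.5 <= tol
```

Checking after every placement keeps per-block error bounded, so small errors can no longer accumulate layer over layer the way they did in the initial design.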
Hardware
Our project did not rely on much hardware beyond what was already available in the lab. We mainly utilized the Sawyer arm along with its built-in wrist camera.
The only other necessary hardware was the set of wooden blocks with AR tags attached, which served as the blocks to stack. As mentioned on our Design page, we originally wanted to stack blue 3D-printed blocks and rely on color sensing to detect them, but this proved too inaccurate. We also found that the plastic material made the 3D-printed blocks slightly slippery during pick-and-place, which caused some attempted pick-ups to drop the block entirely.