Design
Our project aims to design a system that sorts square blocks by color using the Sawyer robot. Key criteria include precise sensing, robust planning, and efficient actuation to achieve accurate sorting, grasping, and placement. The desired functionality involves correctly recognizing block values based on color, determining the sorting sequence, and executing movements reliably.
Using Two Cameras: Sawyer Right Hand + Logitech
Originally, our design included only one camera: the right arm camera on the Sawyer. However, we quickly realized that it only captured images in black and white, which would prevent us from performing color sensing. We therefore decided to use a second camera to capture the color of the blocks, while using the arm camera to capture AR tag positions. The tradeoff was that we needed to deal with two cameras instead of one, which added some complexity to our project. While we did consider not using the right hand camera, we ultimately decided that it was necessary, as the tf transform already existed between the arm and the base of the robot (our fixed frame). This made it much easier to calculate the positions of the AR tags relative to the base, allowing us to pick up each block precisely. Had we only used the Logitech, we would have needed to recalculate the position of the camera relative to the base each time we set up (as well as the transform between the AR tags and the camera).
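To illustrate why that existing transform chain mattered, here is a minimal sketch of looking up an AR tag's pose relative to the base with tf2. The frame names ("base", "ar_marker_0") are assumptions based on a typical ar_track_alvar setup, not necessarily our exact configuration.

    #!/usr/bin/env python
    import rospy
    import tf2_ros

    # Sketch: look up an AR tag's pose relative to the robot base.
    # Frame names ("base", "ar_marker_0") are assumptions; ar_track_alvar
    # publishes one frame per detected tag.
    rospy.init_node('tag_lookup')
    tf_buffer = tf2_ros.Buffer()
    listener = tf2_ros.TransformListener(tf_buffer)

    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        try:
            # Because the arm camera is already in Sawyer's tf tree,
            # this single lookup chains camera -> base automatically.
            trans = tf_buffer.lookup_transform('base', 'ar_marker_0',
                                               rospy.Time(0),
                                               rospy.Duration(1.0))
            t = trans.transform.translation
            rospy.loginfo("Tag at x=%.3f y=%.3f z=%.3f", t.x, t.y, t.z)
        except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
                tf2_ros.ExtrapolationException):
            rospy.logwarn("Transform not available yet")
        rate.sleep()

With only the Logitech, this single lookup would instead require a manually calibrated camera-to-base transform redone at every setup.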
Using Both Color and AR Tags
Originally, we wanted to track the positions and order of the blocks based on color alone. However, we quickly realized that with the varying lighting conditions in the lab and the similarity between the colors, it was very hard to get accurate and precise initial positions of the blocks relative to the base. Moreover, as noted in the previous section, because the Sawyer arm does not have a color camera, calculating the transforms between the blocks, the robot base, and the color camera would have added considerable complexity and imprecision to our grasping. We therefore decided to use AR tags for precise localization relative to the base (as we did in lab) and to use the color of the blocks to determine their order relative to each other. While this again introduced a bit more complexity into our pipeline, it significantly improved our actuation, as we were able to grasp the blocks quite accurately.
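To give a sense of the color-ordering step, here is a hedged sketch of ranking blocks left-to-right by detecting color blobs in the Logitech frame. The HSV ranges and camera index are illustrative assumptions that would need tuning to the lab's lighting, which is exactly why color alone was unreliable for localization.

    import cv2
    import numpy as np

    # Illustrative HSV ranges, not our tuned values. Note that red hue
    # wraps around 180 in OpenCV, so a real range needs two intervals.
    COLOR_RANGES = {
        'red':   ((0, 120, 70),   (10, 255, 255)),
        'green': ((40, 80, 70),   (80, 255, 255)),
        'blue':  ((100, 120, 70), (130, 255, 255)),
    }

    def order_blocks_by_color(bgr_image):
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        detections = []
        for color, (lo, hi) in COLOR_RANGES.items():
            mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
            m = cv2.moments(mask)
            if m['m00'] > 1e3:  # enough pixels to count as a block
                cx = m['m10'] / m['m00']  # blob centroid x, in pixels
                detections.append((cx, color))
        # Sort left-to-right: this yields the blocks' relative order,
        # not their 3D positions (the AR tags provide those).
        return [color for _, color in sorted(detections)]

    cap = cv2.VideoCapture(0)  # Logitech device index is an assumption
    ok, frame = cap.read()
    if ok:
        print(order_blocks_by_color(frame))
    cap.release()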
Path Planner and Controller
We experimented with three combinations of path planners and controllers, summarized in the chart below:
We ultimately chose the linear planner combined with a PID controller, for the reasons outlined in the chart. This combination made our movements smoother and more precise than the other planners and controllers we tried.
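For concreteness, here is a minimal sketch of the kind of joint-space PID loop such a controller runs. The gains and the velocity-command interface are illustrative assumptions, not our exact tuned values.

    import numpy as np

    class JointPID(object):
        """Minimal joint-space PID sketch; gains are placeholders,
        not our tuned values."""

        def __init__(self, kp, ki, kd, num_joints=7):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = np.zeros(num_joints)
            self.prev_error = np.zeros(num_joints)

        def step(self, q_target, q_current, dt):
            error = q_target - q_current
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            # Velocity command sent to the arm at each control tick.
            return (self.kp * error
                    + self.ki * self.integral
                    + self.kd * derivative)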
Note: when we say "linear planner," we mean that we adapted the linear trajectory class used in Lab 7. While this does not guarantee that the robot moves in a perfectly linear fashion, we were able to control the waypoints between our starting and target positions to constrain the robot as much as possible to a "linear-ish" path, which was good enough for our purposes (see the sketch below).
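As an illustration of that waypoint constraint, here is a hedged sketch of interpolating intermediate waypoints along the straight line between the start and target positions. The waypoint count is an assumption, and the step that hands these waypoints to the planner for execution is elided.

    import numpy as np

    def linear_waypoints(start_pos, target_pos, num_waypoints=10):
        """Evenly spaced positions along the straight line from start
        to target. More waypoints constrain the planner more tightly
        to a 'linear-ish' path."""
        start = np.asarray(start_pos, dtype=float)
        target = np.asarray(target_pos, dtype=float)
        fractions = np.linspace(0.0, 1.0, num_waypoints)
        return [start + f * (target - start) for f in fractions]

    # Hypothetical example: from above one block to above the drop zone.
    waypoints = linear_waypoints([0.6, -0.2, 0.3], [0.6, 0.3, 0.3])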