In our design criteria, we assigned ourselves the following objectives:
We achieved this objective with great success: our final solution met all the expectations set by our design criteria.
This objective was met with partial success. The system works reliably as long as the camera hardware does not lose focus on the AR tag. Because we could not eliminate this limitation, our final solution fell just short of our design expectations.
As discussed in the Results section, we were able to largely meet the criteria for efficient movement once we relaxed the time-efficiency requirement. Consequently, our final solution achieved only mixed success.
Our final solution almost fully succeeded in this case. Our movement patterns were precise enough that the path tolerated slight errors despite pushing the crate with a cylindrical object (which offsets the crate's angle). We further mitigated this error by attaching a pusher, a rectangular piece of cardboard mounted on the front of the Turtlebot. Thus, we largely met the criteria we set out to achieve.
We succeeded in accomplishing this design task entirely. As demonstrated in the Results section, the Turtlebot consistently adapts to changing and disruptive adversarial conditions. Even after significant disruptions during the planning and actuation phases, the Turtlebot correctly and quickly recalculates the necessary path and executes it as expected.
The main difficulties we encountered were trade-offs in performance, struggles with restrictive hardware, and software issues. The trade-offs centered on accuracy vs. speed and consistency vs. movement efficiency; the Design and Results sections investigate these trade-offs in more detail. In summary, we prioritized accuracy over speed and consistency over movement efficiency. We also struggled with many hardware and software problems.
The most problematic issue we faced was the limited capability of the cameras available to us. The Logitech webcams in our main setup have low resolution and poor focus. As a result, our image pipeline sometimes loses track of the AR tags even when the tags are fully in view. Furthermore, the auto-focus feature on these cameras is unreliable, and setting the focus manually cannot achieve consistent results due to human error.
We tried to mitigate our hardware issues in several ways. First, we attempted to use an HD Microsoft camera mounted to the beam on a window in the lab, giving us a high-resolution view of a larger field. Unfortunately, this camera proved too taxing on performance: its frame rate sometimes drops significantly, causing it to once again lose sight of the AR tags on the field in real time.
We also tried to avoid using a webcam altogether, instead using the Kinect camera mounted on the Turtlebot to look around and find the crate. However, this approach was time-inefficient and gave the robot a poor sense of its position relative to the origin (see the Design section). Another drawback of the webcams was the limited portion of the field visible at once; this is simply unavoidable with our current hardware. Finally, the Turtlebot's connection would sometimes lag or drop, causing it to overshoot commanded distances or rotations.
When using the webcams, we had to account for the different sizes of AR tags. On the field this was not a major issue, but on the board we experienced some inconsistency with the setup. Additionally, we encountered seemingly random extrapolation errors and TF transform errors; inconsistencies among ROS time objects led to hard-to-find errors appearing mid-execution.
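The TF extrapolation errors above typically arise when a transform is requested at a timestamp newer than the latest sample either frame has published (in ROS, the usual remedies are querying `lookupTransform` with `rospy.Time(0)` or calling `waitForTransform` first). The underlying idea can be sketched without ROS as clamping the query time to the latest moment both transform buffers actually cover; the function and buffer representation here are hypothetical, not our project's code:

```python
def latest_common_time(stamps_a, stamps_b):
    """Return the most recent timestamp covered by both transform buffers.

    stamps_a and stamps_b are sequences of sample times (in seconds) for
    two transform sources. Looking up a transform at a time newer than
    either buffer's latest sample is what triggers TF's extrapolation
    errors; clamping the query time to the older of the two latest
    stamps avoids that.
    """
    if not stamps_a or not stamps_b:
        raise ValueError("empty transform buffer")
    return min(max(stamps_a), max(stamps_b))
```

This mirrors what TF's own "latest common time" lookup does internally: it never extrapolates past the slower of the two frames.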
AR Tag 11 had a unique issue: the near-mirrored pattern of the tag caused it to appear flipped upside-down to our program. This occurred whenever the edge of the tag escaped the view of the field webcam. We solved this problem by printing a new, higher-contrast AR tag, after which the edge no longer dropped out of the webcam's view.
On the field itself, the Turtlebot would sometimes block the AR tags for the reference marker or the crate from the webcam's view. This would cause the Turtlebot to freeze and wait for the tags to return to view, disrupting all processes. We could not prevent the occlusion itself, so we instead moved the reference marker to the very corner of the camera view. Similarly, we elevated the crate so that the AR tag on its top is never obstructed by the Turtlebot.
The hardware restrictions proved too severe to fully compensate for. With better hardware, our project's performance could improve dramatically; without it, there is little we can do about this flaw.
A particular problem we failed to resolve was the simultaneous linear and angular velocity controller. We could not adequately account for the many unique issues such a controller introduces, but it is our second priority going forward.
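A minimal sketch of what such a simultaneous controller could look like, assuming a unicycle model with proportional gains; the gains, speed limits, and function name are hypothetical, and on a real Turtlebot the resulting pair would be published as a geometry_msgs/Twist message:

```python
import math

def simultaneous_velocity_command(x, y, theta, goal_x, goal_y,
                                  k_lin=0.5, k_ang=1.5,
                                  max_lin=0.3, max_ang=1.0):
    """Compute linear and angular velocity toward a goal in one step.

    (x, y, theta) is the robot pose; gains and limits are illustrative.
    Returns (v, w): forward speed in m/s and turn rate in rad/s.
    """
    dx, dy = goal_x - x, goal_y - y
    distance = math.hypot(dx, dy)
    heading_error = math.atan2(dy, dx) - theta
    # Wrap the heading error into [-pi, pi] so the robot turns the short way.
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    v = max(-max_lin, min(max_lin, k_lin * distance))
    w = max(-max_ang, min(max_ang, k_ang * heading_error))
    return v, w
```

The difficulty we hit in practice is exactly what this sketch glosses over: driving forward while turning couples the two errors, so slight heading mistakes translate into lateral drift of the pushed crate.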
A more significant next step is boosting fault tolerance. Until now, we have handled fault tolerance by decreasing the Turtlebot's speed, trading time for a smaller margin of error. Ideally, improved fault tolerance would let us add speed to the actuation phase without compromising precision or accuracy.
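One simple form this speed-versus-error trade-off could take is scaling the commanded speed down as tracking error grows, rather than driving slowly all the time. The thresholds and function below are hypothetical, a sketch of the idea rather than our implementation:

```python
def fault_tolerant_speed(tracking_error, max_speed=0.3, error_budget=0.05):
    """Scale commanded speed down as tracking error grows.

    tracking_error and error_budget are in meters; both values are
    illustrative. At zero error the robot moves at full speed; at or
    beyond the budget it drops to a cautious crawl.
    """
    crawl = 0.05  # minimum speed so the robot never fully stalls
    if tracking_error >= error_budget:
        return crawl
    scale = 1.0 - tracking_error / error_budget
    return crawl + (max_speed - crawl) * scale
```

This keeps the current conservative behavior as the worst case while recovering speed whenever the AR-tag tracking is clean.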
Finally, given enough additional time, our stretch goal is to add obstacles to the field. After all, the real world is full of noise and non-ideal conditions; incorporating this challenge into the Sokoban field is a natural continuation for the Turtlebot.