The project has two main parts:
Retrieve the live positions of the robot, crate, and block, and calculate the coordinates of each with respect to a pre-specified origin (one for the board and one for the field). With these coordinates, we check whether the block and crate coordinates line up; if not, we plan the appropriate path to push the crate.
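The coordinate bookkeeping above can be sketched as follows. This is a minimal sketch assuming planar (x, y) positions already extracted from the AR tags; the function names and the 5 cm tolerance are illustrative, not the project's actual code:

```python
import math

def to_local(point, origin, origin_yaw=0.0):
    """Express a world-frame (x, y) point relative to a chosen origin frame."""
    dx, dy = point[0] - origin[0], point[1] - origin[1]
    c, s = math.cos(-origin_yaw), math.sin(-origin_yaw)
    return (c * dx - s * dy, s * dx + c * dy)

def lined_up(block_xy, crate_xy, tol=0.05):
    """Check whether the crate already sits where the block on the board
    says it should, within tol meters."""
    return math.dist(block_xy, crate_xy) <= tol
```

If `lined_up` returns False, the difference between the two local coordinates gives the push that the path planner needs to produce.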
Design Goals:
Given the crate's destination, the robot needs to figure out the correct path to push the crate. It must account for the spacing needed to push the crate while staying clear of obstacles and, most importantly, not moving the crate unexpectedly. The robot should also be able to make small adjustments to ensure the crate's final position is precise.
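One way to pick a pushing waypoint with the spacing concern in mind is to stand behind the crate along the crate-to-goal direction, so that driving straight forward pushes the crate toward the goal. This is a hedged sketch; the clearance value and function name are made up for illustration:

```python
import math

def push_approach_pose(crate_xy, goal_xy, clearance=0.25):
    """Waypoint behind the crate, opposite the push direction,
    `clearance` meters back, facing the goal. Returns (x, y, yaw)."""
    dx, dy = goal_xy[0] - crate_xy[0], goal_xy[1] - crate_xy[1]
    dist = math.hypot(dx, dy)
    ux, uy = dx / dist, dy / dist  # unit vector of the push direction
    return (crate_xy[0] - clearance * ux,
            crate_xy[1] - clearance * uy,
            math.atan2(dy, dx))
```

Navigating to this pose first, and only then driving forward, keeps the robot from clipping the crate sideways on approach.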
In the worst-case scenario for our hypothetical automated warehouse, the robot might be unexpectedly moved by human interference or unexpected obstacles. We want the system to be robust and reliable, so our robot automatically corrects its trajectory upon being displaced from its previously planned path. Through this method, any mishaps can be accounted for.
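A simple way to detect that the robot has been displaced is to compare its live pose against the planned path and trigger a replan when the deviation exceeds a threshold. This is a sketch under assumed names; the 15 cm threshold is illustrative:

```python
import math

def off_course(pose_xy, path, threshold=0.15):
    """True if the robot is farther than `threshold` meters from
    every waypoint on the planned path."""
    return min(math.dist(pose_xy, wp) for wp in path) > threshold
```

On each camera update, if `off_course` returns True, the planner can simply be re-run from the robot's current pose instead of continuing the stale path.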
Design Choices:
Initially the plan was to model the field/board as actual levels in Sokoban, but we found ourselves severely limited by our hardware setup. The biggest problem was the resolution of the webcam combined with our limited ability to get a good view of a large portion of the ground. Without a large enough view of the field, we couldn't place multiple crates or obstacles, since the TurtleBots couldn't leave the camera's frame: their AR tags were essential for navigation.
In an attempt to get more visibility, we mounted the webcam on a tripod with a USB extender to raise it higher and tried angling it. If the webcam was too high up, the resolution couldn't resolve the details on AR tags that were farther away; if the camera was tilted too much, it likewise failed to pick up the AR tags.
A big consequence of the hardware limitations was how quickly we could move the robot. Because the camera detected the AR tags unreliably, the robot couldn't run too fast: if the camera missed the TurtleBot's new position, the previous velocity command would keep running for the next few seconds (until the AR tag was finally detected again), causing the TurtleBot to slightly overshoot waypoints. This meant we either had to slow it down or increase the tolerance for reaching waypoints. Losing accuracy would be extremely detrimental to pushing the crate, because there is inherently some error when using the TurtleBot to push it. Stacking that error on top of the TurtleBot's position error would double the variance in the crate's position, so we opted for the slower but more reliable approach.
Another major choice we made was to modify the proportional control to output exclusively angular or exclusively linear commands, never both. More details can be found in Implementation: Exclusive Proportional Control.
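In outline, exclusive proportional control rotates in place until the robot roughly faces its target, and only then drives forward, so the controller never commands angular and linear velocity at once. This is a sketch with illustrative gains and names, not the project's exact implementation:

```python
import math

def exclusive_p_control(pose, target, k_ang=1.0, k_lin=0.5, ang_tol=0.1):
    """Return (linear, angular) velocity. Turn in place while the heading
    error exceeds ang_tol radians; otherwise drive straight."""
    x, y, theta = pose
    heading = math.atan2(target[1] - y, target[0] - x)
    # wrap the heading error into [-pi, pi]
    err = math.atan2(math.sin(heading - theta), math.cos(heading - theta))
    if abs(err) > ang_tol:
        return (0.0, k_ang * err)              # rotate only
    dist = math.hypot(target[0] - x, target[1] - y)
    return (k_lin * dist, 0.0)                 # drive only
```

Separating the two commands makes the robot's motion a sequence of straight segments, which keeps pushes against the crate predictable.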
The biggest design choice we made concerned how to retrieve the coordinates of all relevant objects. One of the first ideas was to use computer vision to detect the small-scale blocks on the board and the Kinect to detect the location of the crate. Upon exploring the different ways to use the Kinect's point-cloud data to locate the crate, we found quite a few shortcomings that deterred us from using it.
First of all, when it came to locating the blocks on the small-scale board, we found computer vision to be unnecessary. Our main goal in the project was to explore optimizations for navigating the robot in a warehouse environment, and the small-scale board was just a sort of controller for this. AR tags were more than sufficient, and gave reliable and consistent coordinate data.
Using the Kinect for the robot's crate detection was a far more interesting problem, but there were severe issues with latency. While trying to combine proportional control with crate localization from the Kinect data, we saw significant lag and imprecise movement due to the crate's different silhouettes from different angles (it was a rectangular crate). It was hard to determine whether the inaccuracy in the crate's position was due to lag or to bad handling of the point-cloud data. Another big issue was establishing an origin point for the field from the TurtleBot's sensors alone. Placing a vertical AR tag seemed like a plausible solution, but would have led to odd mishaps whenever the tag was out of view; the crate itself could also stand in the TurtleBot's field of view and block the tag.
In the end, computer vision had too many downfalls when precise coordinates were needed, so we opted to use AR tags.