Mock dynamics were generated on a random grid to simulate the terrain visual inputs the Turtlebot would receive. Experiments were then run, with no knowledge of the true dynamics grid, until the planner converged to the optimal path. The following paths come from our planner when every grid cell's dynamics are estimated independently; a sketch of the mock setup follows the captions below.
Chosen path at each of 8 iterations; black is the final, optimal path
Estimated cost of the chosen plan from the planner's perspective
Difference between the true cost of the chosen plan and the true cost of the optimal plan
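To make the setup concrete, here is a minimal sketch of how such a mock dynamics grid could be generated; the grid size, value range, and initial estimates are illustrative assumptions, not our exact configuration:

```python
import numpy as np

# Illustrative mock setup: dynamics values in (0, 1], where lower values
# correspond to slower, rougher terrain. Sizes and ranges are assumptions.
rng = np.random.default_rng(0)
true_dynamics = rng.uniform(0.2, 1.0, size=(20, 20))  # hidden from the planner

# The planner starts with an independent, uninformative estimate per cell
# and full epistemic uncertainty, which shrinks as cells are traversed.
est_dynamics = np.full((20, 20), 0.5)
epistemic_unc = np.ones((20, 20))
```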
Next, we restrict the dynamics to a fixed set of possible values (10-20), one-hot encoded, to simulate common terrain features that appear both within a single observation and across the training and execution sets. This converges in 2-3 iterations for the problem sizes tested, and the runtime was significantly faster.
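As a rough sketch of this discretized setup (the number of terrain types, value range, and grid size below are illustrative assumptions): each cell draws its dynamics from a small shared set, so observing one cell of a terrain type informs every cell of that type, which is why convergence is faster.

```python
import numpy as np

# Illustrative discretized setup: each cell is one of K terrain types,
# and all cells of a type share one dynamics value.
K = 20
rng = np.random.default_rng(1)
dynamics_values = np.linspace(0.2, 1.0, K)        # shared dynamics per terrain type
terrain_ids = rng.integers(0, K, size=(20, 20))   # hidden terrain type of each cell
one_hot = np.eye(K)[terrain_ids]                  # one-hot features, shape (20, 20, K)
true_dynamics = dynamics_values[terrain_ids]
```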
Left: epistemic uncertainty map of the first simulation experiment after 1 iteration; 0.1 is most certain. Right: epistemic uncertainty map of the second simulation experiment after 1 iteration; 0.1 is most certain.
Test on 20 dynamics values (between 0.2 and 1) on a 20 × 20 grid:
Converged after just 2 iterations
Average estimation error settled at about 0.15
Test on 20 dynamics values (between 0.2 and 1) on a 10 × 10 grid:
Converged after 2-4 iterations
Average estimation error settled at about 0.17
In sim, the dynamics in these scenarios are consistently underestimated, possibly due to how the program simulates physics. The planner typically still converges to a solution, but resolving errors like these would improve robustness.
The executed trajectory deviates from the chosen path, even though the PD controller is meant to smooth the trajectory given by Dijkstra's. This poses a potential safety concern, as the robot may veer into undrivable terrain.
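Below is a minimal sketch of a PD heading controller tracking Dijkstra waypoints; the gains and interface are illustrative assumptions, not our tuned values.

```python
import numpy as np

KP, KD = 1.5, 0.3  # illustrative gains, not the values used on the Turtlebot

def pd_heading_control(pose, waypoint, prev_error, dt):
    """pose = (x, y, theta); waypoint = (x, y). Returns (angular velocity, error)."""
    x, y, theta = pose
    desired = np.arctan2(waypoint[1] - y, waypoint[0] - x)
    # Wrap the heading error to [-pi, pi] so the robot turns the short way.
    error = (desired - theta + np.pi) % (2 * np.pi) - np.pi
    omega = KP * error + KD * (error - prev_error) / dt
    return omega, error
```

A controller of this form smooths the discrete Dijkstra path but does not explicitly constrain the robot to stay within the planned cells, which is consistent with the deviation we observe.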
In our first physical experiment, we manually slowed the robot, holding it back to simulate rough terrain. As the graphs show, the robot identifies lower D values, indicating slower achievable speeds.
For this setup, we placed a rough patch of gravel at the end of a smooth wooden board. As the robot travels along the gravel, it measures higher aleatoric uncertainty.
For our final real-world experiment, we tested path planning. We instructed the robot to alternate between two motions: navigating to a goal position and returning to its starting point. As it executes these motions, it builds an evolving model of the world, which lets Dijkstra's account for the varying terrain uncertainties in the environments we manually created when deciding which route to take.
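Here is a minimal sketch of uncertainty-aware grid planning with Dijkstra's, assuming each cell stores an estimated dynamics value and an epistemic uncertainty; the cost form and the weight LAMBDA are illustrative assumptions.

```python
import heapq
import numpy as np

LAMBDA = 2.0  # illustrative weight on uncertainty

def plan(d_hat, u, start, goal):
    """Dijkstra over a 2D grid; slow or uncertain cells cost more to enter."""
    rows, cols = d_hat.shape
    cost = 1.0 / d_hat + LAMBDA * u   # traversal-time estimate + uncertainty penalty
    dist, parent = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, np.inf):
            continue
        r, c = node
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nbr[0] < rows and 0 <= nbr[1] < cols:
                nd = d + cost[nbr]
                if nd < dist.get(nbr, np.inf):
                    dist[nbr] = nd
                    parent[nbr] = node
                    heapq.heappush(pq, (nd, nbr))
    path, node = [goal], goal          # walk back from goal to start
    while node != start:
        node = parent[node]
        path.append(node)
    return path[::-1]
```

As the estimates improve and uncertainty shrinks between motions, the same call naturally shifts from avoiding unfamiliar cells to exploiting known fast terrain.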
The robot's reconstructed homography mostly matches the image from the previous run/motion, despite jerks in the Kinect and robot movement and the inaccuracies of the sensor.
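For reference, this is a minimal sketch of frame-to-frame homography estimation using standard OpenCV calls; the feature detector and RANSAC threshold are illustrative choices, not necessarily the exact pipeline we ran.

```python
import cv2
import numpy as np

def estimate_homography(prev_frame, curr_frame):
    """Estimate the homography between two grayscale Kinect frames."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(prev_frame, None)
    kp2, des2 = orb.detectAndCompute(curr_frame, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects outlier matches caused by jerky motion or sensor noise.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```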
As the robot explores its surroundings, the D values converge. The large red region in the second motion appears because the robot moved rapidly with one wheel on the edge of the wooden board and one wheel off; our visual features cannot capture this edge case, so the knowledge was not carried over to the third motion.
These are some of the path plans created. The first two images show the robot exploring new terrain (from different experiments). The third image shows the third motion, where the robot decides it is faster to exploit the terrain it already knows.
The homography-based terrain mapping becomes noisy when the robot shakes; the Kinect sometimes tilts downward, corrupting the homographies.
The Turtlebot sometimes follows jerky trajectories, so its up-and-down tilt varies and the map becomes inaccurate.
It is very difficult to find terrain in the lab that the Turtlebot cannot drive on well, so we simulated difficult terrain by having a human pull on the robot.
In both sim and the real world, we accomplished terrain-aware system identification and planning, finding areas with undesirable dynamics or higher uncertainty and planning around them.
Feature mapping and path planning can be conducted simultaneously, enabling operation in unknown regions.
Imaging and control need more tuning for consistency, but we provide a proof of concept for this method.
While we apply this concept to wheeled robots, the principles involved generalize to other robotic systems that need to be context-aware.
Future work could involve more rigorous testing of the platform on different terrains, such as sand and grass.
Further, experimenting with terrain-aware system identification on different types of robots (non-wheeled) would yield interesting results.
Implementing control barrier functions could help perform path planning in larger environments where Dijkstra's does not work as effectively.
Integrating LIDAR information could help create a better map of the environment and more accurate feature vectors. This could be particularly helpful in areas with varying inclines, where the horizon line may not remain consistent enough for homography and the dynamics equations change.
Finally, with more data and compute, we could use neural networks in place of kernel regression.
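For context, here is a minimal sketch of the kind of kernel regression a neural network would replace, mapping visual feature vectors to dynamics estimates; the RBF kernel and bandwidth are illustrative assumptions.

```python
import numpy as np

def kernel_regress(X_train, y_train, x_query, bandwidth=0.5):
    """Nadaraya-Watson regression: weight observed dynamics by feature similarity."""
    d2 = np.sum((X_train - x_query) ** 2, axis=1)     # squared feature distances
    w = np.exp(-d2 / (2 * bandwidth ** 2))            # RBF weights (bandwidth assumed)
    return np.sum(w * y_train) / (np.sum(w) + 1e-12)  # weighted average of D values
```

A neural network would swap this nonparametric average for a learned mapping, trading some data efficiency for capacity on larger feature sets.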