We provide a plot of position and heading error vs. time for all robots. Regular landmark initialization was used for all robots in this plot. Note that the trace labeled 'odometry' shows the error each robot would accumulate if it performed no correction steps (prediction only).
As expected, scenarios 1 and 2 significantly outperform the naive predict-only odometry baseline. Scenario 2 improves only negligibly over scenario 1. Scenario 3, where robots 1, 3, 4, and 5 are basic and robot 2 is advanced, shows that our CI update formulation slightly improves the position estimates of basic robots relative to straight odometry, but it does not improve their heading error.
A table is provided to evaluate the performance of the different scenarios, using regular landmark initialization. The error metrics used in the table are Mean Position Error (MPE) and RMSE Heading Error.
The table confirms what is displayed in the plot. In particular, the performance improvement of each robot in scenario 2 is negligible compared to that in scenario 1. Furthermore, while scenario 3 has better position performance than odometry alone, it still performs much worse than scenarios 1 and 2. The heading performance of scenario 3 worsens compared to odometry for robots 4 and 5, is exactly the same for robot 3, and is not much better than odometry for robot 1.
Intuition into the performance of the state estimators can be gained by examining the visualization animations provided below and their associated discussions.
Mean Position Error (MPE) and RMSE Heading Error are calculated as follows for a state estimation sequence with N iterations. Variables with a hat denote the estimate. The variable x in this case is a vector of x and y position.
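Assuming the standard definitions consistent with the description above (hat denotes the estimate, x the stacked x–y position), the metrics are:

```latex
\mathrm{MPE} = \frac{1}{N}\sum_{i=1}^{N} \left\lVert x_i - \hat{x}_i \right\rVert_2
\qquad
\mathrm{RMSE}_{\theta} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \mathrm{wrap}\!\left(\theta_i - \hat{\theta}_i\right)^{2}}
```

where wrap(·) maps each heading difference into (−π, π] before squaring, so that errors across the ±π boundary are not overcounted.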
Animations are provided to visualize the performance of the state estimation for all scenarios. The animations corresponding to the results of the table and error plot are shown in the left column (regular landmark initialization).
Scenario 1 was evaluated in the cases with and without ground-truth landmark initialization. Animations of the performance are provided in Figures 1 and 2.
All animations are sped up 20× from real time. The estimated uncertainty of a robot's position is given as a 95% confidence ellipse centered around the robot's position estimate. The robot's heading estimate is drawn as a black line at the center of each confidence ellipse. Each item on the plot is labeled with a "Subject Number" in accordance with the dataset. Subjects 1-5 are robots, subjects 6-15 are landmarks.
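For concreteness, the 95% ellipse parameters can be recovered from a 2×2 position covariance as sketched below. This is a standard construction, not a transcription of our plotting code; the function name and interface are ours.

```python
import numpy as np

def confidence_ellipse(cov, p=0.95):
    """Semi-axis lengths and orientation of the p-confidence ellipse
    of a 2D Gaussian with covariance `cov`.

    For 2 degrees of freedom the chi-square quantile has the closed
    form -2 ln(1 - p); for p = 0.95 this is approximately 5.991.
    """
    chi2_2dof = -2.0 * np.log(1.0 - p)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    semi_axes = np.sqrt(chi2_2dof * eigvals)        # minor, major semi-axes
    angle = np.arctan2(eigvecs[1, -1], eigvecs[0, -1])  # major-axis direction
    return semi_axes, angle
```

The ellipse is then drawn centered on the position estimate, rotated by `angle`, with the returned semi-axis lengths.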
With regular landmark initialization, the 95% confidence ellipses of the landmarks are also plotted, in orange. With ground-truth landmark initialization, the confidence ellipses of the landmarks are not plotted; each robot's landmark estimates are plotted as markers in the same color as the robot.
Scenario 2 was evaluated in the cases with and without ground-truth landmark initialization. Animations of the performance are provided in Figures 3 and 4.
As discussed in the methods section, covariance intersection only fuses two estimates when neither confidence ellipse fits completely inside the other (centered around 0; see the CI widget for intuition). As a result, there are very few instances where a robot updates the position of another robot: the combined uncertainty of the measuring robot's position plus the measurement error is likely to be much greater than the position uncertainty of the measured robot.
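A minimal sketch of that containment check follows, assuming the gating rule described above. For zero-centered ellipses, the ellipse of P_a fits entirely inside that of P_b exactly when P_b − P_a is positive semidefinite; the function name is ours.

```python
import numpy as np

def ellipses_nested(P_a, P_b, tol=1e-9):
    """True if one covariance ellipse (centered at 0) fits entirely
    inside the other, i.e. if P_a <= P_b or P_b <= P_a in the
    positive-semidefinite ordering. Fusion is skipped in that case.
    """
    a_in_b = np.all(np.linalg.eigvalsh(P_b - P_a) >= -tol)
    b_in_a = np.all(np.linalg.eigvalsh(P_a - P_b) >= -tol)
    return bool(a_in_b or b_in_a)
```

Under this rule, a CI update is attempted only when `ellipses_nested` returns False, i.e. when neither estimate dominates the other.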
The positions of robots 1, 4, and 5 are updated using CI on multiple occasions. Robot 4 is updated most often, with omega going as low as 0.68. A value of omega = 1 indicates that only the robot's own position estimate is used; omega = 0 indicates that only the other robot's measurement is used.
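The fusion step itself can be sketched as below. This is the standard covariance-intersection formula with the omega convention just stated, plus a simple grid search for omega; it is a sketch, not a transcription of our implementation.

```python
import numpy as np

def ci_fuse(x_a, P_a, x_b, P_b, omega):
    """Covariance-intersection fusion with mixing weight omega.

    omega = 1 keeps only the robot's own estimate (x_a, P_a);
    omega = 0 keeps only the other robot's measurement (x_b, P_b).
    """
    Ia, Ib = np.linalg.inv(P_a), np.linalg.inv(P_b)
    P = np.linalg.inv(omega * Ia + (1.0 - omega) * Ib)
    x = P @ (omega * Ia @ x_a + (1.0 - omega) * Ib @ x_b)
    return x, P

def choose_omega(P_a, P_b, n_grid=101):
    """Pick omega minimizing the trace of the fused covariance
    by brute-force grid search over [0, 1]."""
    omegas = np.linspace(0.0, 1.0, n_grid)
    traces = [np.trace(np.linalg.inv(w * np.linalg.inv(P_a)
                                     + (1.0 - w) * np.linalg.inv(P_b)))
              for w in omegas]
    return omegas[int(np.argmin(traces))]
```

When the measuring robot's combined uncertainty is much larger than the measured robot's, the trace-minimizing omega sits near 1, which is why so few updates have a visible effect.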
One notable example, seen in Fig. 3, is robot 3 updating the position of robot 4 at 108 seconds of dataset time (~5 seconds of video time). In this case omega = 0.76, meaning the fused state estimate still favors robot 4's original position estimate.
Scenario 3 was evaluated in the cases with and without ground-truth landmark initialization. Animations of the performance are provided in Figures 5 and 6. Robot 2 is the advanced robot; all other robots are basic. When the odometry runs on its own for long periods of time, each basic robot's true position no longer falls within its estimated 95% confidence ellipse.
Notice how, after an update step that uses a measurement from robot 2, the position estimate of the basic robot is often correct, but its heading is still wrong. This causes the estimate to quickly diverge from reality again once the basic robot leaves the advanced robot's field of view. This indicates that a cooperative localization system with basic robots needs a way to correct the heading of those basic robots. A potential heading correction step for basic robots, requiring no extra hardware, is discussed in the conclusion section.
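A toy dead-reckoning example (not from our system) illustrates why a position-only correction is short-lived: with an uncorrected heading offset, position error grows linearly with distance traveled after the correction.

```python
import numpy as np

def dead_reckon_error(v, dt, n_steps, heading_err):
    """Position error of straight-line dead reckoning with a constant
    heading offset `heading_err` (rad), starting from a perfect position.
    """
    true_pos = np.array([v * dt * n_steps, 0.0])  # robot actually drives along x
    est = np.zeros(2)
    for _ in range(n_steps):
        # the robot integrates odometry along its (wrong) believed heading
        est += v * dt * np.array([np.cos(heading_err), np.sin(heading_err)])
    return np.linalg.norm(est - true_pos)
```

Doubling the distance traveled doubles the resulting position error, so even a perfectly corrected position decays at a rate set by the residual heading error.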
Robot 2 is the advanced robot, the others are basic.