In a multi-robot setting, where per-robot cost is a key limiting factor, it is expensive to put localization and mapping sensors on every robot. We therefore consider two kinds of robots in a multi-robot Simultaneous Localization and Mapping (SLAM) setting:
Advanced Robots have access to sensor data giving the range and bearing to landmarks and to other robots.
Basic Robots have no onboard exteroceptive sensors and rely on odometry alone. They perform a prediction step with the odometry information, but can only perform a correction step when an advanced robot measures their position and communicates it.
Three different scenarios of cooperative localization are investigated:
In the first scenario, all robots are advanced robots, and no robot communicates with another robot. This is the control case, where there is no cooperation. Each robot uses only landmark information.
In the second scenario, all robots are advanced robots, and the robots communicate with each other. When one advanced robot measures another, it sends the measured robot its own position estimate along with the measured range and bearing. The measured robot uses this information in a correction step. This is the best-case scenario of cooperative localization, where all robots are advanced.
In the third scenario, only one robot (Robot 2) is an advanced robot, and the rest are basic. All basic robots perform only prediction steps, unless Robot 2 measures a basic robot's position and communicates it; in that case, that basic robot performs a correction step.
EKF-SLAM is used as the base SLAM algorithm. Scenario 1 uses vanilla EKF-SLAM. When robots communicate with each other, however, the correction step for robot-to-robot measurements is replaced by the Covariance Intersection algorithm, because the EKF's assumption of independent uncertainties is violated.
SLAM is implemented with an Extended Kalman Filter (EKF) in the base case [1]. The state estimate consists of 33 items:
(robot x, robot y, robot heading, landmark 1 x, landmark 1 y, landmark 2 x, landmark 2 y, ... landmark 15 x, landmark 15 y).
This makes the covariance estimate a 33x33 matrix. The EKF is implemented as follows:
Predict. At each time step, a robot updates its position estimate using odometry information. The state propagation function is nonlinear, so a linearization of it is used to propagate the uncertainty of the state. The landmark state estimates do not change in the prediction step. A sketch of this step appears after the Correct description below.
Correct. If a measurement is available at that time step, the robot corrects its position estimate using it. Because landmarks are not always in view of the camera, even in the base case, the number of correction steps varies from time step to time step: if no landmarks are seen, there is no correction step; if 3 landmarks are seen, the correction step (i.e. Kalman gain calculation and fusion) happens 3 times. The traditional EKF correction step is not used when robots measure robots; for a discussion of this, see "The Covariance-Intersection Algorithm".
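As a concrete illustration, the following is a minimal sketch of the prediction step, assuming a unicycle-style odometry model with forward velocity v, angular velocity w, and time step dt; the motion model, noise handling, and function names are assumptions for illustration, not the project's actual code.

```python
import numpy as np

N_LANDMARKS = 15
STATE_DIM = 3 + 2 * N_LANDMARKS  # (robot x, y, heading) + 15 landmarks * (x, y) = 33

def ekf_predict(x, P, v, w, dt, Q_odom):
    """Prediction step: propagate the robot pose with odometry; landmarks stay fixed.

    x      : (33,) state vector [robot x, robot y, robot heading, lm1 x, lm1 y, ...]
    P      : (33, 33) state covariance
    v, w   : odometry forward and angular velocity (assumed unicycle model)
    Q_odom : (3, 3) odometry noise covariance (assumed additive in pose space)
    """
    theta = x[2]

    # Nonlinear state propagation: only the three robot entries change.
    x_pred = x.copy()
    x_pred[0] += v * dt * np.cos(theta)
    x_pred[1] += v * dt * np.sin(theta)
    x_pred[2] += w * dt

    # Jacobian of the propagation with respect to the robot pose (the linearization).
    F_robot = np.array([
        [1.0, 0.0, -v * dt * np.sin(theta)],
        [0.0, 1.0,  v * dt * np.cos(theta)],
        [0.0, 0.0,  1.0],
    ])

    # The full-state Jacobian is identity except for the 3x3 robot block.
    F = np.eye(STATE_DIM)
    F[:3, :3] = F_robot

    P_pred = F @ P @ F.T
    P_pred[:3, :3] += Q_odom  # process noise enters only through the robot pose
    return x_pred, P_pred
```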
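A corresponding sketch of the per-landmark correction, repeated once for each landmark observed at the current time step; the range-bearing measurement model is standard, while the function name and noise matrix are again illustrative.

```python
import numpy as np

def ekf_correct_landmark(x, P, lm_index, z, R_meas):
    """Correction step with one (range, bearing) measurement of landmark lm_index.

    x, P     : current state estimate and covariance (33-dimensional, as above)
    lm_index : landmark index, 0..14
    z        : measured np.array([range, bearing])
    R_meas   : (2, 2) measurement noise covariance
    """
    i = 3 + 2 * lm_index            # start of this landmark's (x, y) in the state
    dx = x[i] - x[0]
    dy = x[i + 1] - x[1]
    q = dx**2 + dy**2
    r = np.sqrt(q)

    # Predicted measurement: range and bearing relative to the robot heading.
    z_hat = np.array([r, np.arctan2(dy, dx) - x[2]])

    # Measurement Jacobian H (2 x 33): nonzero only for the robot pose and this landmark.
    H = np.zeros((2, x.size))
    H[:, 0:3] = np.array([[-dx / r, -dy / r,  0.0],
                          [ dy / q, -dx / q, -1.0]])
    H[:, i:i + 2] = np.array([[ dx / r,  dy / r],
                              [-dy / q,  dx / q]])

    # Innovation (bearing wrapped to [-pi, pi]), Kalman gain, and fusion.
    innovation = z - z_hat
    innovation[1] = (innovation[1] + np.pi) % (2 * np.pi) - np.pi
    S = H @ P @ H.T + R_meas
    K = P @ H.T @ np.linalg.inv(S)

    x_new = x + K @ innovation
    P_new = (np.eye(x.size) - K @ H) @ P
    return x_new, P_new
```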
Two different methods for initializing landmarks are explored.
The traditional default method initializes landmarks as they are seen for the first time. When a landmark is first seen, its position is set to where the robot measures it to be, and its uncertainty includes both the uncertainty of the robot's position and the measurement error. If a landmark is first spotted far away from the robot, the uncertainty in its position is therefore higher. When a landmark is seen for the first time, no correction step is performed using that landmark's measurement. A sketch of this initialization follows the next paragraph.
The ground-truth initialization method initializes landmarks to their ground-truth positions with a small arbitrary uncertainty. This tests whether the performance of cooperative localization under harsh conditions can be improved by knowing the landmark positions in advance.
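A minimal sketch of the default first-sighting initialization, assuming the same state layout and range-bearing measurement model as in the EKF sketches above; cross-covariance terms with previously initialized landmarks are omitted for brevity.

```python
import numpy as np

def initialize_landmark(x, P, lm_index, z, R_meas):
    """First-sighting initialization: place the landmark where it is measured to be.

    The landmark covariance inherits both the robot pose uncertainty and the
    measurement noise, so landmarks first seen far away start out more uncertain.
    """
    i = 3 + 2 * lm_index
    rng, bearing = z

    # Landmark position from the current robot pose and the (range, bearing) measurement.
    x[i]     = x[0] + rng * np.cos(x[2] + bearing)
    x[i + 1] = x[1] + rng * np.sin(x[2] + bearing)

    # Jacobians of the initialization w.r.t. the robot pose and the measurement.
    G_pose = np.array([[1.0, 0.0, -rng * np.sin(x[2] + bearing)],
                       [0.0, 1.0,  rng * np.cos(x[2] + bearing)]])
    G_meas = np.array([[np.cos(x[2] + bearing), -rng * np.sin(x[2] + bearing)],
                       [np.sin(x[2] + bearing),  rng * np.cos(x[2] + bearing)]])

    # Landmark covariance = projected robot-pose uncertainty + projected measurement noise.
    P[i:i + 2, i:i + 2] = G_pose @ P[:3, :3] @ G_pose.T + G_meas @ R_meas @ G_meas.T

    # Cross-covariance with the robot pose (cross-terms with other landmarks omitted here).
    P[i:i + 2, :3] = G_pose @ P[:3, :3]
    P[:3, i:i + 2] = P[i:i + 2, :3].T
    return x, P
```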
A crucial assumption of the EKF correction step is that the uncertainties of the two estimates being fused are independent. This is normally the case, since the error in the measurement of a landmark is independent of the uncertainty of the propagated state (which consists of the errors of odometry data fused with the errors of previous measurements).
When a robot takes a measurement of another robot, however, this assumption is violated. In particular, if robots are using each other's measurements to correct their positions, then each robot's state uncertainty becomes correlated with the other's [1]. For this reason, another sensor-fusion technique has to be used.
The Covariance Intersection algorithm provides a method of fusing two estimates whose uncertainty correlations are unknown.
The equations below show the Covariance Intersection fusion of two state estimates a and b, with uncertainties A and B, into an estimate c with uncertainty C.
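In its standard form, the fused covariance and estimate are computed as:

C^-1 = omega * A^-1 + (1 - omega) * B^-1
c = C * (omega * A^-1 * a + (1 - omega) * B^-1 * b)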
The fusion is loosely analogous to a convex combination of the two estimates.
Note, however, the scalar term omega, which determines the weighting of the two constituent estimates in the combination. Omega must be chosen by optimization: the optimal value minimizes the trace of the output covariance C. In our brute-force implementation, it is found by evaluating the trace of the resulting C matrix for 50 candidate omega values between 0 and 1.
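A minimal sketch of this brute-force search, assuming both estimates have already been expressed in a common state space; the function name and interface are illustrative, not the project's actual code.

```python
import numpy as np

def covariance_intersection(a, A, b, B, n_candidates=50):
    """Fuse estimate a (covariance A) with estimate b (covariance B) via Covariance Intersection.

    omega is chosen by brute force: evaluate the trace of the fused covariance C
    for candidate values in [0, 1] and keep the value that minimizes it.
    """
    A_inv = np.linalg.inv(A)
    B_inv = np.linalg.inv(B)

    best_trace, best_omega, best_C = np.inf, None, None
    for omega in np.linspace(0.0, 1.0, n_candidates):
        C = np.linalg.inv(omega * A_inv + (1.0 - omega) * B_inv)
        if np.trace(C) < best_trace:
            best_trace, best_omega, best_C = np.trace(C), omega, C

    # Fused estimate using the optimal omega.
    c = best_C @ (best_omega * A_inv @ a + (1.0 - best_omega) * B_inv @ b)
    return c, best_C, best_omega
```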
If omega is 0, the result is identical to the b estimate; if omega is 1, it is identical to the a estimate. In practice, the optimum often lands at one of these extremes: since the errors of the constituent estimates are correlated, when the covariance ellipses have similar shapes but different scales, the Covariance Intersection algorithm chooses to use the estimate with the smaller covariance entirely.
A Covariance Intersection Widget is provided to build intuition for how Covariance Intersection sensor fusion works, and how it differs from a traditional Kalman sensor fusion. In particular, note the graph of the trace of the resultant covariance matrix versus omega.