Sensor Fusion was improved in two ways: it was changed to use a greedy approach when choosing which cone observations to merge, and a new distance calculation was tested for determining whether two cone observations refer to the same real cone.
The distance calculation is used in the data association portion of Sensor Fusion. Data association is the problem of deciding whether two cone observations correspond to the same real cone. The distance calculation finds the distance between the two cone observations; if this distance is under a certain threshold, we assume they are the same real cone. They can then be merged into a fusion cone to be used by SLAM.
Last year's team chose the Mahalanobis distance because it normalizes the data using the covariance, accounting for any correlation in the data. This is better than using the Euclidean distance, but it assumes both points come from the same distribution and therefore share a single covariance. The Hellinger distance is a more involved calculation that treats each cone as its own distribution, giving each cone its own covariance. This is more useful for our problem, since each time the sensors "see" a cone, the observation is an estimate of where the real cone is located.
Mahalanobis Distance - a dissimilarity measure between two random vectors x and y of the same distribution with the covariance matrix S
Hellinger Distance - used to quantify the similarity between two probability distributions, treats each cone as its own distribution
μ represents the position vector of each cone
Σ represents the covariance of each cone
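For reference, the standard closed forms of these two distances, assuming each cone estimate is modeled as a Gaussian, are:

$$ d_M(x, y) = \sqrt{(x - y)^{T} S^{-1} (x - y)} $$

$$ H^2(P, Q) = 1 - \frac{\det(\Sigma_1)^{1/4} \, \det(\Sigma_2)^{1/4}}{\det\!\left(\frac{\Sigma_1 + \Sigma_2}{2}\right)^{1/2}} \exp\!\left( -\frac{1}{8} (\mu_1 - \mu_2)^{T} \left( \frac{\Sigma_1 + \Sigma_2}{2} \right)^{-1} (\mu_1 - \mu_2) \right) $$

where P = N(μ_1, Σ_1) and Q = N(μ_2, Σ_2) are the two cone distributions, and S is the shared covariance in the Mahalanobis case.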
Simulator Example
Sensor Fusion has two steps. First, associate and merge cones within the same sensor. Second, associate and merge the resulting cones from one sensor type with those from the other sensor type. The prior implementation only performed the first step. This resulted in many duplicate cones being sent to SLAM, which impacted the car's trajectory.
The new implementation performs both steps properly and uses a greedy approach to merge cones. In the second step, the distances between all pairs of cones are calculated, and pairs under the association threshold are added to a list. The pair with the smallest distance is chosen first and the two cones are merged; neither cone is used again, which prevents duplicate cones from being created. This repeats in a loop until no mergeable pairs remain. The resulting merged cones are sent to SLAM.
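As an illustration, below is a minimal Python sketch of this greedy cross-sensor merge, assuming each cone estimate is a (mean, covariance) pair. The mahalanobis and fuse helpers here are illustrative stand-ins, not the team's actual functions.

import numpy as np

def mahalanobis(a, b):
    # a, b: (mean, covariance) pairs; the shared covariance is taken
    # as the sum of the two cone covariances
    diff = a[0] - b[0]
    s = a[1] + b[1]
    return float(np.sqrt(diff @ np.linalg.inv(s) @ diff))

def fuse(a, b):
    # Covariance-weighted average of two Gaussian cone estimates
    p = np.linalg.inv(np.linalg.inv(a[1]) + np.linalg.inv(b[1]))
    m = p @ (np.linalg.inv(a[1]) @ a[0] + np.linalg.inv(b[1]) @ b[0])
    return (m, p)

def greedy_merge(lidar_cones, camera_cones, threshold):
    # Collect every cross-sensor pair whose distance is under the threshold
    candidates = []
    for i, a in enumerate(lidar_cones):
        for j, b in enumerate(camera_cones):
            d = mahalanobis(a, b)
            if d < threshold:
                candidates.append((d, i, j))
    candidates.sort()  # smallest distance first
    used_a, used_b, fused = set(), set(), []
    for d, i, j in candidates:
        if i in used_a or j in used_b:
            continue  # each cone is merged at most once
        used_a.add(i)
        used_b.add(j)
        fused.append(fuse(lidar_cones[i], camera_cones[j]))
    # Cones that found no partner are passed through unmerged
    fused += [c for i, c in enumerate(lidar_cones) if i not in used_a]
    fused += [c for j, c in enumerate(camera_cones) if j not in used_b]
    return fused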
The SLAM implementation was improved in two important ways. The first is an upgraded implementation of the FastSLAM algorithm, a solution to the SLAM problem researched and developed by Sebastian Thrun and his collaborators. The new implementation, known as FastSLAM2, accounts for measurements to the landmarks when estimating the new pose of the vehicle at each time step, generating particles that have a higher probability of accurately representing the real environment. Using a particle filter, particles from the previous iteration are combined with the new location of the vehicle and the vehicle's observations to create a set of new particles. Including the observations in this process produces more accurate particles and, as a result, a more accurate overall map, which is highly desirable in the autonomous racing application. The previous FastSLAM1 implementation used only positional data from the odometry system to estimate the new pose (essentially the previous pose plus the most recent control input), which is simpler but less accurate.
FastSLAM2 pose prediction pseudo-code
Above is the algorithm for estimating the new pose in the FastSLAM2 algorithm. The loop iterates through all particles, creating an initial guess of the new vehicle pose on the first line (s_t), then performing a series of steps on the following lines that adjust this estimate based on a landmark (cone) observation the vehicle has made. This pose estimation in FastSLAM2 uses the previous pose (s_{t-1}), the control inputs (u_t), and the measurements to observed landmarks (z_t). The final estimated pose for particle n is represented by the symbol s_t^n on the last line. The pseudo-code for estimating the vehicle pose in FastSLAM1 is shown below for comparison.
FastSLAM1 pose prediction pseudo-code
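To make the contrast concrete, below is a simplified Python sketch of the two pose-sampling strategies for a single particle and a single observed landmark. The motion model g, measurement model h, their Jacobians Hs and Hm, and the noise covariances P and R are assumed to be given; none of these names come from the team's codebase.

import numpy as np

rng = np.random.default_rng()

def sample_pose_fastslam1(s_prev, u, g, P):
    # FastSLAM1: sample the new pose from the motion model alone
    return rng.multivariate_normal(g(s_prev, u), P)

def sample_pose_fastslam2(s_prev, u, z, g, P, h, Hs, Hm, mu_lm, Sigma_lm, R):
    # Initial guess of the new pose from the motion model
    s_hat = g(s_prev, u)
    # Linearized measurement prediction for the observed landmark
    z_hat = h(s_hat, mu_lm)
    Hs_ = Hs(s_hat, mu_lm)   # Jacobian of h with respect to the pose
    Hm_ = Hm(s_hat, mu_lm)   # Jacobian of h with respect to the landmark
    # Innovation covariance includes landmark uncertainty and sensor noise
    Q = Hm_ @ Sigma_lm @ Hm_.T + R
    Q_inv = np.linalg.inv(Q)
    # Measurement-informed proposal distribution over the pose
    Sigma_s = np.linalg.inv(Hs_.T @ Q_inv @ Hs_ + np.linalg.inv(P))
    mu_s = s_hat + Sigma_s @ Hs_.T @ Q_inv @ (z - z_hat)
    return rng.multivariate_normal(mu_s, Sigma_s)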
Another major part of the SLAM system is data association. This is the step that matches observed landmarks from the sensor fusion system with landmarks that are stored in the map. The previous implementation used by the GFR team updated each particle individually with each observed landmark. While this worked conceptually, it was inefficient and could produce sub-optimal associations that caused a single landmark to be updated multiple times. To remedy this, the flow of the SLAM system was restructured to process all observed landmarks at the same time for each particle. To do so, a greedy data association scheme was implemented. This scheme calculates correspondence probabilities for all observed-landmark/stored-landmark pairs, then selects the highest correspondences as associations. This prevents multiple observations from being associated with a single stored landmark and creates a more optimal set of associations. The pseudo-code for this greedy association scheme is below.
initialize matrix to store correspondence probabilities (number of observations x number of landmarks)
For each observation:
    calculate correspondence probabilities to all landmarks
    store list of correspondence probabilities in matrix
While not all observations associated or used to add a new landmark:
    find highest correspondence in matrix
    If correspondence is above association threshold:
        update this landmark
        clear this column of the matrix so other observations can't be associated with it
        clear this row of the matrix
    Else:
        add new landmark
        clear this row of the matrix
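A compact Python sketch of the same scheme, assuming the correspondence probabilities have already been computed into an observations-by-landmarks matrix (function and variable names here are illustrative):

import numpy as np

def greedy_associate(prob, threshold):
    # prob: (observations x landmarks) correspondence probability matrix
    prob = prob.copy()
    associations = []
    remaining = set(range(prob.shape[0]))
    while remaining and prob.size > 0:
        obs, lm = np.unravel_index(np.argmax(prob), prob.shape)
        if prob[obs, lm] <= threshold:
            # once the global best is below the threshold, every remaining
            # observation is too, so they all spawn new landmarks
            break
        associations.append((obs, lm))    # update this stored landmark
        prob[:, lm] = -np.inf             # clear column: landmark is taken
        prob[obs, :] = -np.inf            # clear row: observation consumed
        remaining.discard(obs)
    new_landmarks = sorted(remaining)     # leftovers become new landmarks
    return associations, new_landmarks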
Sector Detection Visualization
To create each boundary from the map generated by SLAM, a cone is selected as a starting point. It is then used to capture other cones by sector detection within a certain angle and a certain distance. Competition rules state that cones can only be spaced within a certain range of distance and angle, which ensures the algorithm will capture at least one cone of the same color, since the parameters of the sector detection area are configured based on those rules. If multiple cones appear within the same sector, a few factors are considered to determine the most valid track boundary: the degree of deviation of the cone from the center line of the sector, the distance from the previous cone, and the distance between the formed boundary and an adjacent boundary.
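A simplified Python sketch of a single sector-detection step follows, assuming each cone is a 2D point. The distance limit, sector half-angle, and scoring weight are illustrative placeholders rather than the rule-derived values, and the real system also weighs the distance to the adjacent boundary.

import numpy as np

def next_boundary_cone(current, heading, cones,
                       max_dist=5.0, half_angle=np.radians(60)):
    # current: (x, y) of the last boundary cone
    # heading: direction of the sector's center line, in radians
    # cones: iterable of candidate (x, y) positions of the same color
    best, best_score = None, np.inf
    for cone in cones:
        offset = np.asarray(cone, dtype=float) - np.asarray(current, dtype=float)
        dist = np.linalg.norm(offset)
        if dist == 0 or dist > max_dist:
            continue  # outside the sector's distance range
        bearing = np.arctan2(offset[1], offset[0])
        # wrap the angular deviation from the center line into [-pi, pi)
        deviation = abs((bearing - heading + np.pi) % (2 * np.pi) - np.pi)
        if deviation > half_angle:
            continue  # outside the sector's angular range
        # score favors cones near the center line and close to the
        # previous cone; the 0.1 weight is an arbitrary example value
        score = deviation + 0.1 * dist
        if score < best_score:
            best, best_score = cone, score
    return best  # None if no cone falls inside the sector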