If a person were blindfolded, spun around until they lost their orientation, and then tasked with removing the blindfold and running through an obstacle course, they would first have to observe their surroundings and figure out where they are before proceeding around the course. In the very same way, an autonomous race car needs to observe its surroundings and determine where it is in relation to those surroundings before it can decide where to move in pursuit of its goal of driving around a track. This is essentially the task that the software developed for this project accomplishes. In the context of this project, observing the surroundings can be thought of as "mapping", and figuring out where the car is in relation to those surroundings is "localization". Lastly, actually driving around the observed track is the boundary estimation and trajectory planning part of the project.
The details below explain the pieces that were implemented to give an autonomous race car the ability to accomplish the tasks discussed above: the overall system architecture of the driverless car, where the pieces developed for this project fit into that architecture, and details about the specific pieces themselves. For more detailed technical specifics, please refer to the design page.
Above is a diagram showing all parts of the overall software system on the autonomous racing vehicle. Our sections of the system are circled in red.
The software for this project is all implemented in C++, with each section of the system being a ROS (Robot Operating System) module. ROS allows the modules to communicate using a publish-subscribe architecture. The vehicle has camera and lidar sensors that identify the cones marking the boundaries of the race track the vehicle needs to navigate. This data is cleaned and consolidated in the Sensor Fusion module and passed to the SLAM module. SLAM uses the cones to update the system's estimate of the car's location and its map of the surroundings. The location and map are then sent to Boundary Estimation and Trajectory, which uses the cones to determine the edges of the track and plan a path for the car.
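To illustrate how the modules exchange data, below is a minimal sketch of the publish-subscribe pattern, written against the ROS 1 roscpp API. The topic names ("/fusion/cones", "/slam/map") and the use of geometry_msgs::PoseArray to carry cone positions are illustrative assumptions, not the project's actual message definitions.

```cpp
#include <ros/ros.h>
#include <geometry_msgs/PoseArray.h>

class SlamNode {
public:
  explicit SlamNode(ros::NodeHandle& nh) {
    // Subscribe to fused cone observations published by Sensor Fusion.
    cone_sub_ = nh.subscribe("/fusion/cones", 10, &SlamNode::coneCallback, this);
    // Publish the updated landmark map for Boundary Estimation and Trajectory.
    map_pub_ = nh.advertise<geometry_msgs::PoseArray>("/slam/map", 10);
  }

private:
  void coneCallback(const geometry_msgs::PoseArray::ConstPtr& cones) {
    geometry_msgs::PoseArray map;
    map.header.stamp = ros::Time::now();
    map.header.frame_id = "map";
    // ... update the particles / landmark map from the observed cones here ...
    map.poses = cones->poses;  // placeholder: pass observations straight through
    map_pub_.publish(map);
  }

  ros::Subscriber cone_sub_;
  ros::Publisher map_pub_;
};

int main(int argc, char** argv) {
  ros::init(argc, argv, "slam_node");
  ros::NodeHandle nh;
  SlamNode node(nh);
  ros::spin();
  return 0;
}
```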
The diagram above shows the car as a black circle and the real cone as a green dot. The red ellipses represent two different observations of the same cone; these should be associated and merged together.
Sensor Fusion merges the lidar cone observations and camera cone observations into "fusion" cone observations that are then sent to SLAM. The car uses lidar and camera sensors to find cones on the track. These cones are used by SLAM to update its map of the car's location and surroundings, and that map is then used by Boundary Estimation and Trajectory to calculate a path for the car. To do this, Sensor Fusion must identify which cone observations from the lidar and the camera represent the same real cone.
Sensor Fusion calculates the distance between every possible pair of camera and lidar cone observations. If the distance is under a certain threshold, the two are determined to be the same cone and merged: one observation is discarded and the other is updated based on the position and covariance of its partner. The resulting fusion cone observations are then sent to SLAM.
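The sketch below illustrates this association-and-merge step; it is not the project's exact implementation. Each observation here carries a 2-D position and a scalar variance, whereas the real system may track full covariance matrices, and the threshold value and inverse-variance weighting are assumptions.

```cpp
#include <cmath>
#include <vector>

struct ConeObservation {
  double x = 0.0;
  double y = 0.0;
  double variance = 1.0;  // uncertainty of the position estimate
};

std::vector<ConeObservation> fuseCones(const std::vector<ConeObservation>& lidar,
                                       const std::vector<ConeObservation>& camera,
                                       double match_threshold_m = 0.5) {
  std::vector<ConeObservation> fused;
  std::vector<bool> camera_used(camera.size(), false);

  for (const auto& l : lidar) {
    // Find the closest unmatched camera observation within the threshold.
    int best = -1;
    double best_dist = match_threshold_m;
    for (std::size_t j = 0; j < camera.size(); ++j) {
      if (camera_used[j]) continue;
      double dist = std::hypot(l.x - camera[j].x, l.y - camera[j].y);
      if (dist < best_dist) {
        best_dist = dist;
        best = static_cast<int>(j);
      }
    }
    if (best >= 0) {
      // Same physical cone: merge, weighting each source by its confidence.
      const auto& c = camera[best];
      double wl = 1.0 / l.variance, wc = 1.0 / c.variance;
      ConeObservation merged;
      merged.x = (wl * l.x + wc * c.x) / (wl + wc);
      merged.y = (wl * l.y + wc * c.y) / (wl + wc);
      merged.variance = 1.0 / (wl + wc);
      fused.push_back(merged);
      camera_used[best] = true;
    } else {
      fused.push_back(l);  // lidar-only cone
    }
  }
  // Keep camera-only cones that were never matched.
  for (std::size_t j = 0; j < camera.size(); ++j) {
    if (!camera_used[j]) fused.push_back(camera[j]);
  }
  return fused;
}
```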
Visualization of an actual map vs. a map generated using SLAM. Red ellipses are estimates of landmarks, and the blue line is a sequence of poses.
As the car moves, the SLAM system gathers data from the sensor fusion and odometry systems on the vehicle. This data is used to build a map of the car's surroundings and to determine the car's location on that map. Since this project is limited to an autonomous race car operating on a track marked by cones, the cones are the landmarks from which the map is constructed.
SLAM can be broken down into a few steps, the first of which is pose prediction. SLAM maintains a number of particles, each of which is an estimate of the map and the vehicle's location. For each particle at each time step, the vehicle pose is updated using landmark observations and updates from the odometry system. Next, the landmarks in the map are updated using the observations from the sensor fusion system; a greedy association scheme matches existing and observed landmarks, and Kalman filters update the landmark locations. Lastly, all of the particles are "weighted", which scores how well each one matches reality. Some of the particles are selected to be used in the next iteration, and the worst-scoring particles are removed.
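The sketch below shows only the final weight-and-resample step under the particle-filter scheme described above. The particle contents are simplified placeholders (the per-particle landmark map and the likelihood model are omitted); only the resampling mechanics, where poorly scoring particles tend to be dropped and well scoring ones duplicated, are illustrated.

```cpp
#include <random>
#include <vector>

struct Particle {
  double x = 0.0, y = 0.0, heading = 0.0;  // pose estimate
  // ... per-particle landmark map (omitted) ...
  double weight = 1.0;  // how well this particle's map explains the observations
};

std::vector<Particle> resample(const std::vector<Particle>& particles,
                               std::mt19937& rng) {
  // Draw particles in proportion to their weights.
  std::vector<double> weights;
  weights.reserve(particles.size());
  for (const auto& p : particles) weights.push_back(p.weight);

  std::discrete_distribution<std::size_t> pick(weights.begin(), weights.end());
  std::vector<Particle> next;
  next.reserve(particles.size());
  for (std::size_t i = 0; i < particles.size(); ++i) {
    Particle chosen = particles[pick(rng)];
    chosen.weight = 1.0;  // reset weights after resampling
    next.push_back(chosen);
  }
  return next;
}
```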
Visualization of determining the sequences of cones for the track boundaries.
Boundary Estimation and Trajectory uses the map of the environment and the car location generated by SLAM to determine the track boundaries and plan the trajectory for the vehicle to follow. Once the trajectory is planned, it is sent down the system to the Controls and Actuators team, where it is used to generate the controls needed to ensure that the vehicle can follow the planned trajectory.
Boundary estimation builds two boundary trees that store the cones based on their color and location differences. By checking whether candidate paths conform to the requirements established for the track dimension specification, the system finds the two cone sequences that are most likely to be the boundaries. During this process, the midline of the track is also generated and used as the trajectory for the vehicle, as in the sketch below.
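The following is a hedged illustration of how a midline trajectory can be derived once the two boundary sequences are known: pair each cone on one boundary with its nearest cone on the opposite boundary and take the midpoint. It shows the idea only, not the project's tree-search implementation.

```cpp
#include <cmath>
#include <limits>
#include <vector>

struct Point2D {
  double x = 0.0;
  double y = 0.0;
};

std::vector<Point2D> computeMidline(const std::vector<Point2D>& left_boundary,
                                    const std::vector<Point2D>& right_boundary) {
  std::vector<Point2D> midline;
  midline.reserve(left_boundary.size());
  for (const auto& l : left_boundary) {
    // Find the closest cone on the opposite boundary.
    double best_dist = std::numeric_limits<double>::max();
    Point2D best{};
    for (const auto& r : right_boundary) {
      double dist = std::hypot(l.x - r.x, l.y - r.y);
      if (dist < best_dist) {
        best_dist = dist;
        best = r;
      }
    }
    // The trajectory point is halfway between the paired boundary cones.
    midline.push_back({(l.x + best.x) / 2.0, (l.y + best.y) / 2.0});
  }
  return midline;
}
```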