Particle Filters - Monte Carlo Localization
Particle filters, also known as sequential Monte Carlo methods, are a widely used family of algorithms for estimating the internal state of a dynamic system as it evolves over time, especially when the system is nonlinear and uncertain. The central idea is to represent the possible states of the system with a set of “particles,” each encoding a hypothesis about the system’s current state. As new observations (sensor readings) arrive, each particle is weighted by its consistency with these measurements, so particles that better explain the data receive higher importance. Over time, this lets the filter maintain a probability distribution over all plausible states, even when direct observation is impossible or noisy.
The particle filter operates in three main steps, repeated at each time step:
Prediction: Each particle’s state is updated based on the system’s motion or process model (e.g., the robot moves according to odometry readings).
Correction: A sensor model evaluates how likely the observed sensor data would be if the robot were in each particle’s state, and assigns each particle a weight reflecting that likelihood.
Resampling: Particles are resampled (with replacement) based on their weights, so that particles matching the observations survive and proliferate, while unlikely hypotheses are discarded.
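The three steps above can be sketched as a single filter update. This is a minimal, generic sketch, not the project’s actual code: the `motion_model` and `sensor_likelihood` callables are hypothetical placeholders for whatever process and measurement models the system uses.

```python
import numpy as np

def particle_filter_step(particles, weights, control, measurement,
                         motion_model, sensor_likelihood, rng):
    """One predict-correct-resample cycle over a 1-D array of particles."""
    # Prediction: propagate each particle through the (noisy) motion model.
    particles = np.array([motion_model(p, control, rng) for p in particles])
    # Correction: weight each particle by the likelihood of the measurement.
    weights = np.array([sensor_likelihood(measurement, p) for p in particles])
    weights /= weights.sum()
    # Resampling: draw particles (with replacement) in proportion to weight.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

After resampling, the weights are reset to uniform, since the selection step has already absorbed the importance information into the particle set itself.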
Monte Carlo Localization in Wean Hall
In this project, I implemented global robot localization using a particle filter for a mobile robot navigating in an indoor environment (Wean Hall). The robot is initially “lost,” with no knowledge of its starting position. Its only information sources are noisy odometry data and laser rangefinder measurements, along with access to an occupancy grid map of the environment.
The core components implemented include:
Motion Model: The robot’s predicted movements are simulated for each particle using the odometry motion model, capturing uncertainty in both translational and rotational movement.
Sensor Model: The likelihood of each particle’s state is computed using a probabilistic beam measurement model tailored for the laser rangefinder; this involves simulating expected laser readings (via ray-casting) and comparing these to the actual observations.
Resampling Strategy: I applied low-variance (systematic) resampling, which lowers the extra variance that resampling itself introduces and helps preserve a diverse, representative particle set.
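Low-variance (systematic) resampling can be written compactly: instead of drawing N independent random numbers, it draws a single offset and sweeps N evenly spaced pointers across the cumulative weights. The sketch below is an illustrative standalone version, not the project’s exact implementation.

```python
import numpy as np

def low_variance_resample(particles, weights, rng):
    """Systematic resampling: one random offset plus N evenly
    spaced pointers over the cumulative weight distribution."""
    n = len(particles)
    # Pointers at u, u + 1/n, u + 2/n, ... with u drawn once from [0, 1/n).
    positions = rng.uniform(0.0, 1.0 / n) + np.arange(n) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0  # guard against floating-point round-off
    # For each pointer, pick the first particle whose cumulative weight covers it.
    idx = np.searchsorted(cumulative, positions)
    return particles[idx]
```

Because all pointers share one random offset, a particle with weight w is copied either floor(N·w) or ceil(N·w) times, which is what keeps the resampling variance low compared with N independent draws.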
For computational efficiency and effectiveness, I initialized particles only in free areas of the map, subsampled the laser scan data, and vectorized key parts of the implementation. Special care was taken to manage numerical stability during likelihood calculations (e.g., working in log-space).
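The log-space trick mentioned above matters because a beam model multiplies many per-ray probabilities, and the product underflows to zero in floating point. A common remedy, sketched here under the assumption that per-particle scores are accumulated as sums of log-likelihoods, is to normalize with the log-sum-exp trick:

```python
import numpy as np

def normalize_log_weights(log_weights):
    """Turn per-particle log-likelihoods into normalized weights.
    Subtracting the max before exponentiating (log-sum-exp trick)
    keeps the computation finite even for very negative log values."""
    shifted = log_weights - np.max(log_weights)
    w = np.exp(shifted)
    return w / w.sum()
```

Even when every raw likelihood would underflow to 0.0 (e.g. log-weights near -1000), the shifted version preserves the correct relative weights.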
The implemented particle filter demonstrated the ability to robustly localize a lost robot in complex, ambiguous indoor environments. Over time, the distribution of particles converged around the robot’s true position as more sensor data was incorporated. The system was tested on multiple data logs, producing accurate trajectory estimates and consistently aligning with ground truth path data.