Lab 8: Particle filter in simulation

In this lab you will practice implementing a particle filter for localizing the Cozmo robot on a simple map. You will work in simulation and, in the next lab, optionally transfer your implementation to the real robot in a real-world map.

1. Setting

For this lab, the robot is assumed to be in an arena with markers on the walls that the robot can detect. The rectangular arena is discretized into small squares (Figure 1). The origin (0,0) is at the bottom left; the X axis points right and the Y axis points up. The robot has a local frame in which the X axis points to the front of the robot and the Y axis points to the robot's left. Markers can appear only on the walls of the arena. The direction a marker is facing is defined as the positive X direction of the marker's local coordinate frame, and Y points to the marker's left.

For motion, the robot uses its wheels and relies on odometry to estimate how much it has moved after each control input. An odometry measurement gives the current robot pose relative to the last robot pose, i.e. expressed in the last pose's local coordinate frame (Figure 2). It has the format odom = (dX, dY, dH).
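As a concrete illustration, composing a global pose with such a local-frame odometry reading might look like the sketch below. The function name and the degrees convention are assumptions for illustration, not part of the starter code:

```python
import math

def apply_odometry(x, y, h, odom):
    """Compose a global pose (x, y, h) with an odometry reading
    odom = (dX, dY, dH) expressed in the robot's previous local frame.
    Headings are in degrees. Hypothetical helper, not in the starter code."""
    dx, dy, dh = odom
    rad = math.radians(h)
    # Rotate the local displacement into the world frame, then translate.
    gx = x + dx * math.cos(rad) - dy * math.sin(rad)
    gy = y + dx * math.sin(rad) + dy * math.cos(rad)
    gh = (h + dh) % 360
    return gx, gy, gh
```

For example, a robot at (0, 0) facing 90 degrees (up) that drives forward one unit ends up at (0, 1), still facing 90 degrees.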

For sensing, the robot uses its front-facing camera with a 45-degree field of view (FOV). The camera can only see markers within its FOV. When a marker is visible, the robot can measure the marker's position and orientation relative to itself. The marker measurement is the relative transformation from the robot to the marker, expressed in the robot's local frame (Figure 3). At any point in time the robot may see one or more markers, or it may see none.
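To make the frame convention concrete, the expected marker measurement for a given robot pose can be computed by rotating the world-frame offset into the robot's local frame, as in this sketch (the function name and degrees convention are illustrative assumptions):

```python
import math

def marker_in_robot_frame(rx, ry, rh, mx, my, mh):
    """Expected marker measurement (relative x, y, heading) in the robot's
    local frame, given the robot's world pose (rx, ry, rh) and the marker's
    world pose (mx, my, mh). Angles in degrees. Hypothetical helper."""
    rad = math.radians(rh)
    dx, dy = mx - rx, my - ry
    # Rotate the world-frame offset by -rh to express it in the robot frame.
    lx = dx * math.cos(rad) + dy * math.sin(rad)
    ly = -dx * math.sin(rad) + dy * math.cos(rad)
    lh = (mh - rh) % 360
    return lx, ly, lh
```

For example, a robot at (0, 0) facing 90 degrees sees a marker at world position (0, 2) as being 2 units straight ahead (local x = 2, local y = 0).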

Figure 1: Robot arena and coordinate representations

Figure 2: Odometry measurements

Figure 3: Marker measurements

2. Task Overview

You will implement a particle filter which takes robot odometry measurements as motion control input and marker measurements as sensor input. We will provide starter code in which you will implement two functions: motion_update() and measurement_update(). Both functions have the same input and output: a set of particles that maintains the belief about the robot pose.

    • particles = motion_update(particles, odom) The input of the motion update function includes particles representing the belief p(x_{t-1} | u_{t-1}) before the motion update, and the robot's new odometry measurement. The output of the motion update function should be a set of particles representing the belief p~(x_t | u_t) after the motion update.
    • particles = measurement_update(particles, mlist) The input of the measurement update function includes particles representing the belief p~(x_t | u_t) after the motion update, and the list of localization marker observation measurements. The output of the measurement update function should be a set of particles representing the belief p(x_t | u_t) after the measurement update. Note that the measurement update must include resampling to work correctly.
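One possible shape for these two functions is sketched below. This is a minimal illustration, not the required implementation: the Particle stand-in, the noise constants, and the predict_markers callback are all assumptions made here for self-containment. In the starter code, the real Particle class lives in particle.py, the noise levels in setting.py, and the map information comes from grid.py; the real measurement_update takes only (particles, mlist).

```python
import math
import random

# Illustrative noise levels; the real values are defined in setting.py.
ODOM_TRANS_SIGMA = 0.02
ODOM_HEAD_SIGMA = 2.0
MARKER_TRANS_SIGMA = 0.5

class Particle:
    """Minimal stand-in for the Particle class defined in particle.py."""
    def __init__(self, x, y, h):
        self.x, self.y, self.h = x, y, h

def motion_update(particles, odom):
    """Move every particle by the odometry reading plus sampled Gaussian
    noise, rotating the local displacement into each particle's frame."""
    dx, dy, dh = odom
    moved = []
    for p in particles:
        ndx = random.gauss(dx, ODOM_TRANS_SIGMA)
        ndy = random.gauss(dy, ODOM_TRANS_SIGMA)
        ndh = random.gauss(dh, ODOM_HEAD_SIGMA)
        rad = math.radians(p.h)
        moved.append(Particle(p.x + ndx * math.cos(rad) - ndy * math.sin(rad),
                              p.y + ndx * math.sin(rad) + ndy * math.cos(rad),
                              (p.h + ndh) % 360))
    return moved

def measurement_update(particles, mlist, predict_markers):
    """Weight each particle by how well its predicted markers match the
    observed list mlist, then resample. predict_markers(p) is a
    hypothetical callback returning the (x, y, h) markers particle p
    would see; the starter code derives this from grid.py/particle.py."""
    if not mlist:
        return particles
    weights = []
    for p in particles:
        predicted = predict_markers(p)
        w = 1.0
        for ox, oy, oh in mlist:
            if not predicted:
                w *= 1e-6  # observed a marker this particle cannot explain
                continue
            # Score the observation against the closest predicted marker.
            d = min(math.hypot(ox - px, oy - py) for px, py, ph in predicted)
            w *= math.exp(-d * d / (2 * MARKER_TRANS_SIGMA ** 2))
        weights.append(w)
    if sum(weights) == 0:
        weights = [1.0] * len(particles)  # degenerate case: keep all
    # Resampling: draw a new particle set in proportion to the weights.
    return random.choices(particles, weights=weights, k=len(particles))
```

Note that random.choices implements simple multinomial resampling; a low-variance (systematic) resampler is a common alternative that reduces sampling noise.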

3. Simulation Implementation

In this lab you will develop and test your particle filter with a simulated robot. You are provided the following files:

    • particle_filter.py: Particle filter you should implement
    • autograder.py: Grades your particle filter implementation
    • pf_gui.py: Script to run/debug your particle filter implementation with GUI
    • particle.py: Particle and Robot classes and some helper functions.
    • grid.py: Grid map class, which contains map information and helper functions.
    • utils.py: Some math helper functions. Feel free to use any of them.
    • setting.py: Some world/map/robot global settings.
    • gui.py: GUI helper functions to show map/robot/particles. You do not need to look into details of this file.

You will implement motion_update() and measurement_update() functions in particle_filter.py. Please refer to the particle class definition in particle.py.

When you run pf_gui.py, a GUI window shows the world, robot, and particles. The ground-truth robot pose is shown in red (with dashed lines representing its FOV). Each particle is shown as a red dot with a short line segment indicating its heading angle. The estimated robot pose (the average over all particles) is shown in grey if the particles have not yet gathered into a single cluster, or in green once they have (meaning the estimate has converged).

Two types of simulated robot motions are implemented in pf_gui.py:

    1. The robot drives forward; if it hits an obstacle, it bounces off in a random direction.
    2. The robot drives in a circle (this is the motion the autograder uses).

Feel free to change the setup in pf_gui.py for your debugging.

4. Notes

  • In this lab you do not have to worry about the robot being kidnapped; it will remain on a continuous trajectory.
  • The particle filter is a randomized algorithm; each run will produce slightly different behavior. Make sure to test your code thoroughly.
  • If you need to sample from a Gaussian distribution in Python, use random.gauss(mean, sigma).
  • The simulated robot adds Gaussian noise to both odometry and marker measurements. The levels of noise are set in setting.py.
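For example, sampling one noisy odometry reading might look like this (the sigma values here are illustrative; the actual noise levels are defined in setting.py):

```python
import random

random.seed(0)  # make debugging runs reproducible
# Simulate one noisy odometry reading. The sigma values are illustrative;
# the actual noise levels are defined in setting.py.
true_odom = (1.0, 0.0, 5.0)
noisy_odom = (random.gauss(true_odom[0], 0.02),
              random.gauss(true_odom[1], 0.02),
              random.gauss(true_odom[2], 2.0))
```

Seeding the random number generator while debugging makes failures reproducible; remove the seed when testing overall robustness.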