In this lab you will implement a Particle Filter to localize your robot within a global coordinate frame fixed within a known map. You will leverage a 2D scanning lidar as your main source of exteroceptive sensing.
The goals of lab 04 are to design a Particle Filter and use it for online and offline robot localization experiments. The corresponding lab 04 subgoals are listed below:
Lidar setup
Map Creation
PF Design
Offline localization experiments
Online localization experiments
There are three deliverables for lab 04 which must be submitted on Brightspace.
Lab 04 Video - that documents at least 2 different online localization experiments.
Lab 04 Report - that documents all four subgoals.
Lab 04 Code - your particle_filter.py file.
The new base code can be found here on our repo on GitHub. There are several key changes to the basecode.
robot_gui.py - The code now brings in a lidar stream. The lidar feed will be displayed in your GUI. You can comment out the display if it runs slowly on your computer, but keep it enabled at first for debugging.
particle_filter.py - This file is where you will add most of your code. Functions are created for you to fill in. Add more functions as desired. At the bottom you will notice a function for offline PF that loads in your data and iterates over it for state estimation.
robot_arduino_code.ino - This file has been updated to collect data from the lidar and send it to your laptop. Be aware that this may slow things down.
data_handling.py - There are some additional plotting and data loading functions.
For this lab, you will need to use a lidar to provide relative range and bearing measurements of your robot with respect to mapped walls and objects. Be sure to pick up your lidar from Prof. Clark ahead of time.
Step 1: Remove the top deck of the robot with a screwdriver. Next, screw on your lidar using M2.5 screws (although some people report needing M3). You can find some in your Osoyoo car chassis kit.
Step 2: Check the cables in the thin cardboard box that comes with your lidar. Ideally they look like those in the image directly below:
If your lidar doesn't come with the cables above, it will come with the cable below. One end of this cable has two white adapters - one 3 pin and one 4 pin (see left of image). Do NOT modify this end of the cable, since it plugs into your lidar.
The other end has a single white 7 pin adapter (see right end in image below). You will need to cut off this adapter and replace it with 7 individual jumper cables. After cutting off the 7 pin adapter, strip, solder, and shrink-wrap jumper cable ends onto the exposed wires. (You want to achieve connections like those in the image above.) These jumper cable ends will plug into your Arduino board. Check your Arduino board to decide whether you need male or female ends on your jumper cables.
Step 3: Using the lidar cable that breaks out into jumper adapters, attach the lidar to the Arduino board using the following wiring diagrams.
Step 4: Turn on the robot, wait a few seconds for it to connect to wifi and power the lidar. Make sure the lidar spins.
Step 5: Run the robot_gui.py file. Connect to the robot. Make sure you can see the lidar scan in the gui.
Step 6: Characterize the lidar sensor measurements - e.g. place the robot at a known distance from walls and record lidar data. Establish whether there is significant bias in the range measurements, and calculate the variance of the range readings. This variance will be helpful when calculating particle weights in your PF.
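One way to do this characterization is sketched below, using hypothetical readings logged at a known 1 m wall distance (the function name and values are illustrative, not part of the base code):

```python
import statistics

def characterize_ranges(measured_ranges, true_distance):
    """Estimate bias and variance of lidar range readings taken at a
    known distance to a wall. Returns (bias, variance) in the same
    units as the inputs."""
    mean_range = statistics.fmean(measured_ranges)
    bias = mean_range - true_distance                # systematic offset
    variance = statistics.variance(measured_ranges)  # spread around the mean
    return bias, variance

# Hypothetical readings logged 1.00 m from a wall:
bias, var = characterize_ranges([1.02, 0.99, 1.01, 1.00, 1.03], 1.00)
```

Repeat this at several distances; if the bias changes with range, you may want to correct for it before computing weights.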
PF localization requires a map known ahead of time. In this section of work, you will need to create one.
Step 1: Choose a location in the world where you plan to run your navigation experiments.
Step 2: Code up your map. See the parameters.py file for an example of Prof. Clark's office. At the bottom of the file you will see the wall corners list. Each wall has four parameters: corner 1 x, corner 1 y, corner 2 x, and corner 2 y. These xy coordinates are the end points of the walls. You can use a measuring tape to get the lengths of your own walls, and note the corner positions in your defined global coordinate frame.
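A hypothetical wall list in the parameters.py style might look like the following for a 3 m by 2.5 m room (the coordinates here are made up; use your own tape-measured values and global frame):

```python
# Each entry is [corner1_x, corner1_y, corner2_x, corner2_y] in meters,
# expressed in your defined global coordinate frame.
wall_corners = [
    [0.0, 0.0, 3.0, 0.0],   # south wall, 3 m long along +x
    [3.0, 0.0, 3.0, 2.5],   # east wall
    [3.0, 2.5, 0.0, 2.5],   # north wall
    [0.0, 2.5, 0.0, 0.0],   # west wall
]
```

A quick sanity check is to plot these segments and confirm the room outline matches your measurements before running the PF.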
For this section of work, you will design and implement your PF to estimate the states x, y, theta of your robot within your map. Implement your PF in your fork of the lab 3 repo code. There is a file particle_filter.py where your code will be added.
Step 1: PF PRELIMS -
Particle Definition - We use a state-and-weight description of particles (already done for you; see the Particle class constructor).
Initialization - Each particle can be initialized uniformly randomly within some random range, or normally distributed. You must code these functions up within the Particle class, (i.e. functions randomize_uniformly and randomize_around_initial_state).
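A minimal sketch of those two initializers, assuming a Particle carries x, y, theta, and a weight (the base code's Particle constructor may hold more fields; the range and sigma parameters here are illustrative):

```python
import random

class Particle:
    def __init__(self, x=0.0, y=0.0, theta=0.0, weight=1.0):
        self.x, self.y, self.theta, self.weight = x, y, theta, weight

    def randomize_uniformly(self, x_range, y_range, theta_range):
        # Uniform draw over a rectangular region of the workspace,
        # for when the initial pose is unknown.
        self.x = random.uniform(*x_range)
        self.y = random.uniform(*y_range)
        self.theta = random.uniform(*theta_range)

    def randomize_around_initial_state(self, x0, y0, theta0, sigma_xy, sigma_theta):
        # Gaussian draw around a roughly known initial pose.
        self.x = random.gauss(x0, sigma_xy)
        self.y = random.gauss(y0, sigma_xy)
        self.theta = random.gauss(theta0, sigma_theta)
```

The choice between the two matters later in the unknown-start experiments: a uniform initialization is more forgiving of a bad initial guess.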
Map - You must code the geometry that calculates the predicted distance of a lidar measurement from the robot to a wall segment, noting the direction of the lidar ray. Add your code to the function get_distance_to_wall.
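The core of that geometry is a ray-segment intersection. Below is one standard formulation, a sketch only (the handout's get_distance_to_wall signature inside the base code may differ):

```python
import math

def get_distance_to_wall(px, py, ray_angle, wall):
    """Distance along a lidar ray from (px, py) at ray_angle (global
    frame, radians) to the wall segment [x1, y1, x2, y2].
    Returns None if the ray misses the segment."""
    x1, y1, x2, y2 = wall
    dx, dy = math.cos(ray_angle), math.sin(ray_angle)  # unit ray direction
    ex, ey = x2 - x1, y2 - y1                          # wall direction
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:
        return None                                    # ray parallel to wall
    # Solve p + t*d = w1 + s*e for t (ray parameter) and s (wall parameter).
    t = ((x1 - px) * ey - (y1 - py) * ex) / denom
    s = ((x1 - px) * dy - (y1 - py) * dx) / denom
    if t >= 0.0 and 0.0 <= s <= 1.0:
        return t          # d is unit length, so t is the distance
    return None
```

For a full predicted scan, take the minimum distance over all walls for each beam, since the nearest wall occludes the others.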
Step 2: PREDICTION - Leveraging the motion model work from Labs 2 and 3, design a Prediction step for your Particle Filter. Specifically decide on the following:
Control input - What is your u_t vector?
Transition Function - What is your g(x_tm1, u_t) function? You will add your code to the propagate_state function of the particle. Use helper functions if you want.
Random u_t - How much randomness should be added to your u_t vector before it is propagated through the transition function? You can handle this directly in your propagate_state function (where it is needed), or write a helper function to do this.
Propagate all particles - In the prediction function of the ParticleFilter class, propagate all particles in the particle set forward.
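The prediction ideas above can be sketched as follows, assuming a unicycle-style model with u_t = (v, omega) and Gaussian control noise; both the model and the noise levels are assumptions, and your lab 2/3 motion model may differ:

```python
import math
import random

def propagate_state(x, y, theta, v, omega, dt, sigma_v=0.05, sigma_omega=0.02):
    """One noisy motion-model step: perturb u_t = (v, omega), then apply
    a simple unicycle transition g(x_{t-1}, u_t). The sigmas are
    placeholders to tune. Returns the new (x, y, theta)."""
    v_n = v + random.gauss(0.0, sigma_v)          # noisy linear velocity
    w_n = omega + random.gauss(0.0, sigma_omega)  # noisy angular velocity
    x_new = x + v_n * dt * math.cos(theta)
    y_new = y + v_n * dt * math.sin(theta)
    theta_new = theta + w_n * dt                  # wrap with angle_wrap in your code
    return x_new, y_new, theta_new
```

Because each particle draws its own noise, the particle cloud spreads out during prediction, which is exactly what lets the correction step recover from odometry drift.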
Step 3: CORRECTION - Leveraging the lidar characterization work above, design a Correction step for your Particle Filter. Specifically decide on the following:
Measurement - What is your z_t vector?
Weight Calculation Function - What is your weight_calculation function?
Resampling - For a particle set, you want to be able to resample it randomly, generating multiple copies of particles with higher weights, and fewer copies of particles with lower weights. The number of particles should remain constant for this implementation. Code this up in the resample function of the ParticleSet class. Call the resample function appropriately from the ParticleFilter's correction function.
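One common way to realize these two pieces is a Gaussian beam likelihood for the weights (using the range variance from your lidar characterization) and low-variance (systematic) resampling. This is a sketch under those assumptions, not the only valid design:

```python
import math
import random

def weight_from_scan(pred_ranges, meas_ranges, range_var):
    """Likelihood of a measured scan given ranges predicted from the map,
    as a product of per-beam Gaussians. Real code should skip invalid
    beams and guard against underflow (e.g. use log-weights)."""
    w = 1.0
    for z_pred, z in zip(pred_ranges, meas_ranges):
        w *= math.exp(-(z - z_pred) ** 2 / (2.0 * range_var))
    return w

def resample(particles, weights):
    """Low-variance (systematic) resampling: returns a same-size list
    drawn in proportion to the weights, using a single random offset."""
    n = len(particles)
    total = sum(weights)
    step = total / n
    r = random.uniform(0.0, step)
    out, c, i = [], weights[0], 0
    for m in range(n):
        u = r + m * step
        while u > c:
            i += 1
            c += weights[i]
        out.append(particles[i])   # deep-copy in real code so copies evolve independently
    return out
```

Systematic resampling touches each particle once and introduces less sampling noise than drawing n independent samples, which is why it is a popular default for PFs.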
Step 4: STATE ESTIMATE OUTPUT - In the ParticleSet class, calculate the mean state of your particle distribution within the function update_mean_state.
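The subtlety in the mean state is the heading: a plain average of angles near +/-pi lands on the wrong side. A sketch using a circular mean, with particles represented here as (x, y, theta, weight) tuples for illustration (the base code's ParticleSet stores Particle objects instead):

```python
import math

def update_mean_state(particles):
    """Weighted mean pose of a particle set given as
    (x, y, theta, weight) tuples. x and y average normally; theta
    averages unit vectors so wrapped angles come out correctly."""
    total_w = sum(w for _, _, _, w in particles)
    mean_x = sum(w * x for x, _, _, w in particles) / total_w
    mean_y = sum(w * y for _, y, _, w in particles) / total_w
    mean_theta = math.atan2(
        sum(w * math.sin(t) for _, _, t, w in particles),
        sum(w * math.cos(t) for _, _, t, w in particles),
    )
    return mean_x, mean_y, mean_theta
```

Note that if the particle cloud is multi-modal (e.g. before the filter converges), the mean can fall between clusters; keep that in mind when interpreting early state estimates.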
Hint 1: Use the angle_wrap function any time you calculate a new angle.
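If you are unsure what angle_wrap should do, one common one-line implementation is below; the base code's version may use a different but equivalent convention:

```python
import math

def angle_wrap(a):
    """Wrap an angle in radians into (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))
```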
Hint 2: Search the particle_filter.py file for all occurrences of ### Add student code here ### and be sure you added code there.
Now you can test your PF with real data logged from the robot.
Step 1: Design a "simple" trajectory for preliminary testing that makes it easy to debug your filter, and one or more for more complex testing.
Step 2: Drive the "simple" trajectory, being sure to log the robot data as well as some form of truth measurements (think about this, there are several ways to obtain truth measurements - but none of them perfect).
Step 3: Run a script (e.g. the bottom function of the particle_filter.py file) that loads your data file and runs your PF over it. First, configure your PF to run only the prediction step (i.e. modify the code so corrections never happen). Do the state estimates look good? Do your particles diverge? Then add in your correction step. Does performance improve?
Step 4: Test your code against these conditions:
Unknown start vs known start: Try the PF against one of your data files, but modify the hard coded "known" start pose to be at different poses in the workspace. Be sure your initial particle distribution for the state estimate accommodates the distance between the initial guess and the actual pose of the robot. A good filter will be robust to bad guesses of the initial pose.
Kidnapped robot problem (if you have time)
Step 5: Plot your trajectories to be included in your report. Be sure to plot estimated and true states when possible. Include particles in at least some XY plots. Error plots may also be useful. Be able to describe your PF performance using these plots in your report.
Now that you are confident your PF is working, let's run it in real time.
Step 1: Start with the robot in view of the camera. The PF state estimate of the robot (and confidence ellipse) should be visible on the central pane of the GUI. Drive the robot around and be sure the state estimate updates properly on the GUI. Note that the Robot class has a member self.particle_filter, which should call your code and receive an update call once per robot control cycle.
Step 2: Design a trajectory to document that pushes the limit of your PF, and can be documented in both video and report plots. Drive the trajectory and log state estimates, particle states, truth data, 3rd person video, and screen capture video of your GUI (e.g. with QuickTime).
Step 3: Create a picture-on-picture video that shows both your 3rd person video and the GUI screen captured video. iMovie might work. You will upload this to Brightspace as a deliverable.
Write a formal report, using IEEE format as mentioned on the website lab schedule page. For this lab, be sure to include all plots mentioned above, and more plots if you think they are significant. Assume the reader knows about robotics, but that they are not familiar with our class. Sections should include:
Abstract - The section should provide an overview of the work. There should be a minimum of one sentence that informs the reader about the motivation, method, experiment, and results. Be sure to provide at least two significant quantifiable results from your results that a reader may find interesting.
Introduction - Use at least one paragraph to describe the motivation for the PF. Use a paragraph to provide an overview of each section of the paper. E.g. “The method section will describe mathematical details of the motion model developed, after which the experimental design section will detail how the model was validated.”.
Method - Describe your PF design with mathematical equations. Start with what is given to you - i.e. introduce the robot (with an image) and discuss the inputs to your PF. All design decisions from section 2 should be included. Be sure to define all new variables in text. Equations should not have words, just Greek-letter variables with number and letter subscripts. Number all equations.
Experiment design - Use images, photos, figures to explain the experimental setup and how physical measurements were taken for testing the PF.
Experimental results - Show all your plots here. Discuss assumptions, problems, successes, etc. Label all plots. Each plot should be referred to at least once in your text discussions.
Conclusion - Present a high level understanding of the performance of your PF when estimating robot poses. Highlight low and high performance aspects of your robot/PF system as a whole. Quantify performance claims.
References - Cite key references. You may want to do a little research to cite key textbooks, PF implementations from the past.