So far, you have successfully programmed robots using reactive control strategies. You have also combined these reactive control strategies together using finite state control. Next, you will build on these skills as you continue to explore the field of mobile robotics. In this project you will tackle some more advanced robotics problems such as:
This list is not at all exhaustive; if there is a particular problem that you want to investigate, you are free to chart your own course.
Learning Objectives
Teaming
For this project, you must work with at least one other student. There is a strict maximum of three members per team. You are free to work across the two sections if you'd like.
Deliverables
There are three deliverables for this project.
In-class Presentation / Demo: I'd like each team to spend about 10 minutes presenting what they did for this project. You can structure the presentation in whatever manner you'd like; however, you should try to meet these goals:
This presentation / demo should be very informal and will be assessed in a purely binary fashion (basically, did you do the things above).
Your code: you should turn in your code as a ROS package. Make sure to let me know which Github repo you are using for your project. Only one member of the team needs to turn their code in.
Writeup: in your ROS package create a file to hold your project writeup. Any format is fine (markdown, word, pdf, etc.). Your writeup should touch on the following topics:
Two Paths
There are two ways to complete this project:
Choose your Own Adventure
Here are some possible project ideas; however, pretty much anything in the mobile robotics field is okay. I can provide more details on any of these if desired.
Considerations When Choosing an Open-Ended Project Topic:
You might want to wait to use the camera. The next project (after this one) will focus heavily on computer vision. You will probably be much more prepared to utilize the camera effectively for that project. If you choose to use the camera for this project you will need to be motivated to pick up some of the necessary computer vision largely independently (of course I will help you as much as possible).
Have a well-defined and realistic MVP. My advisor used to say that the real world crystallizes the hard problems. That is to say, the messiness of dealing with the unpredictability of the physical world can make seemingly easy problems quite difficult. If you choose an open-ended project, it is important for you to have a well-defined and realistically achievable MVP. Clearly you might not have a great sense of what would be a realistic MVP at this point, but the warmup project should have provided some hints. I'm also happy to discuss this issue with you during class.
Don't overestimate the Neato. As you know from the warmup project, the Neato is not perfect. For instance, the Lidar is subject to noise and missing measurements. As a result, you should not view the various built-in navigation modules for the Neato as black boxes that will work flawlessly. For instance, the Neato SLAM works well under certain conditions, but it fails under others. If your project relies on the Neato's SLAM working perfectly all the time (for instance, you might be using the map output from SLAM for your project), you are likely to become frustrated. That being said, you shouldn't underestimate the Neato either (that is, it works quite well under some conditions). Part of this project will be exploring these conditions and finding workarounds when the Neato is having trouble.
Make sure your project topic is aligned with your learning goals. There is a bit of a tradeoff between choosing a project topic around studying an algorithm versus solving a task. If you start from the point of view of exploring a particular algorithm on the Neato (for instance, the particle filter), then you can be pretty confident that you will leave the project knowing a lot about that algorithm. However, if you structure your project around achieving some task (for instance, having the Neato deliver a burrito to your friend who is in class and hungry), there is much less certainty about what you will learn from the project. In my experience, choosing a task-oriented project will often organically lead to a deep dive into an algorithm; however, it is often difficult to predict ahead of time which algorithm will require deep investigation / tuning and which can largely be used as a black box. This is not to say that either the algorithm-oriented or task-oriented project is better; it is just a tradeoff that you should be mindful of.
Scaffolded Introduction
For this project you will be programming your robot to answer a question that many of us have pondered ourselves over the course of our lifetimes: "Where am I?". The answer to this question is of fundamental importance to a number of applications in mobile robotics such as mapping and path planning.
The project is scaffolded in a number of ways. Firstly, I have provided you with starter code. Secondly, I have implemented this project myself so I will be able to troubleshoot any issues with your project relatively easily. Thirdly, I have a suggested step-by-step process to complete the project (see the end of this document for more details).
For this project, we will assume access to a map of the robot's environment. This map could either be obtained using a mapping algorithm (such as SLAM), derived from building blueprints, or even hand-drawn (if you are a better artist than myself). Given that we have a map of our environment, we will pose the question of "Where am I?" probabilistically. That is, at each instant in time, t, we will construct a probability distribution over the robot's pose within the map, denoted as x_t, given the robot's sensory inputs, denoted as z_t (in this case lidar measurements), and the motor commands sent to the robot at time t, u_t. After applying a "mild" assumption, Bayes' rule gives us the recipe for creating an update rule for the probability distribution we are after:
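A standard way to write this recursion (following the notation above and the treatment in Probabilistic Robotics) is:

p(x_t | z_1 ... z_t, u_1 ... u_t) ∝ p(z_t | x_t) ∫ p(x_t | x_{t-1}, u_t) p(x_{t-1} | z_1 ... z_{t-1}, u_1 ... u_{t-1}) dx_{t-1}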
Additionally, we typically assume that p(x_0) is given (i.e., it is based on our prior expectations about the location and orientation of the robot). Unfortunately, for all but a few special cases we cannot compute the formula above in a timely manner (the computational bottleneck in the equation is the integral over x_{t-1}).
A particle filter is an approximate technique for representing (and then updating) the distribution we care about (i.e. the one in the equation above). The key idea is to approximate the distribution p(x_t | z_1 ... z_t, u_1 ... u_t) using a finite collection of particles that represent hypotheses about the robot's pose in the space:
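One common way to write this approximation is as a weighted sum of point masses δ(·) centered at the particles:

p(x_t | z_1 ... z_t, u_1 ... u_t) ≈ Σ_{i=1}^{n} w_{t,i} δ(x_t - h_{t,i})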
where h_{t,i} is the ith particle at time t (i.e. a hypothesis about the robot's location and orientation) and w_{t,i} is the weight associated with that hypothesis (i.e. it encodes the probability associated with that particular hypothesis).
We can show (see for example "Probabilistic Robotics" page 103-104) that the approximation above becomes exact as n goes to infinity provided we construct the particle set at time t using the following procedure:
To initialize the particle filter, we simply sample the particles according to p(x_0).
To get started, you should pull from the upstream comprobo2014 repository using the typical procedure. In a directory called my_pf you will find the starter code. There are two files there called pf_level1.py and pf_level2.py. These files are similar, except pf_level2.py has less of the implementation filled in.
Before you get started on your own implementation, make sure to read through the comments in the starter code. I will highlight a few of the key sections of code and how they connect to the procedure for updating the particle filter defined above.
def scan_received(self, msg):
    # Code omitted for brevity
This is the basic skeleton of the particle filter. I have taken care of implementing this for you; you may modify the above, but the idea is to focus on implementing the missing parts of the code.
def update_particles_with_odom(self, msg):
    """ Implement a simple version of this (Level 1) or a more complex one
        (Level 2) """
    new_odom_xy_theta = TransformHelpers.convert_pose_to_xy_and_theta(self.odom_pose.pose)
    # compute the change in x,y,theta since our last update
    if self.current_odom_xy_theta:
        old_odom_xy_theta = self.current_odom_xy_theta
        delta = (new_odom_xy_theta[0] - self.current_odom_xy_theta[0],
                 new_odom_xy_theta[1] - self.current_odom_xy_theta[1],
                 new_odom_xy_theta[2] - self.current_odom_xy_theta[2])
        self.current_odom_xy_theta = new_odom_xy_theta
    else:
        self.current_odom_xy_theta = new_odom_xy_theta
        return
    # TODO: modify particles using delta
    # For added difficulty: Implement sample_motion_odometry (Prob Rob p 136)
This is where you implement the step: Compute x' by sampling x' ~ p(x' | x, u_t). The idea is to use the change in the odometry frame between two particle filter updates to sample a new hypothesis of the robot's pose. There are some straightforward methods for implementing this, and there are some more difficult ones (see the comments above for suggestions).
A few hints on updating using odometry:
It is a bit hard to conceptualize what is happening here geometrically, so make sure to draw lots of pictures. A naive approach is to just add the components of delta to the corresponding components of the particles in the particle filter. This will not give you what you want. To see this, try implementing this rule and watch what happens to the particles when you move the robot forward. If the approach were correct (which it isn't), each individual particle would translate along its own heading by the same amount that the robot model moves; this will not occur if you use the naive (and wrong) implementation.
The easiest way to correctly characterize what is happening geometrically is to think of the robot moving between the old odometry pose and the new odometry pose in a series of three steps: first a rotation (r1) to face the new position, then a straight-line translation to the new position, and finally a second rotation (r2) to reach the new heading.
The second step is easy to determine (it is simply given by math.sqrt(delta[0]**2 + delta[1]**2)). r1 can be determined by looking at the angle specified by the translation (delta[0], delta[1]). r2 can be determined by looking at the difference between how much the robot thinks it has rotated in total (delta[2]) and how much it rotated in step 1 (r1).
Once you have spelled out each of these three steps, you can apply them to each of your particles to get their new poses. Make sure to add noise in these various steps to give your particles some diversity.
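As a concrete illustration, here is a minimal sketch of how those three steps might be applied to a single particle (apply_odom_delta is a hypothetical helper, not part of the starter code; it assumes a Particle object with x, y, and theta attributes, and the noise magnitudes are arbitrary choices):

import math
import random

def apply_odom_delta(particle, delta, old_theta, d_noise=0.02, theta_noise=0.05):
    """ Move one particle according to the odometry delta using the
        rotate / translate / rotate decomposition described above.
        delta: (dx, dy, dtheta) expressed in the odometry frame
        old_theta: the robot's heading in the odometry frame before the motion """
    # step 1: how much the robot turned to face the direction it translated in
    r1 = math.atan2(delta[1], delta[0]) - old_theta
    # step 2: how far the robot translated
    d = math.sqrt(delta[0]**2 + delta[1]**2)
    # step 3: the remaining rotation needed to reach the new heading
    r2 = delta[2] - r1

    # apply the same three steps in the particle's own frame, adding Gaussian
    # noise at each step so the particle cloud keeps some diversity
    particle.theta += r1 + random.gauss(0, theta_noise)
    noisy_d = d + random.gauss(0, d_noise)
    particle.x += noisy_d * math.cos(particle.theta)
    particle.y += noisy_d * math.sin(particle.theta)
    particle.theta += r2 + random.gauss(0, theta_noise)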
def update_particles_with_laser(self, msg):
    """ Updates the particle weights in response to the scan contained in the
        msg """
    # TODO: implement this
    pass
This is where you modify the weight of each particle using the laser scan data. The default method of doing this will be using a likelihood field model (more detail to come).
class OccupancyField:
    """ Stores an occupancy field for an input map.  An occupancy field returns the
        distance to the closest obstacle for any coordinate in the map
        Attributes:
            map: the map to localize against (nav_msgs/OccupancyGrid)
            closest_occ: the distance for each entry in the OccupancyGrid to the closest obstacle
    """

    def __init__(self, map):
        self.map = map      # save this for later
        # TODO: implement this (level 2)

    def get_closest_obstacle_distance(self, x, y):
        """ Compute the closest obstacle to the specified (x,y) coordinate in the map.
            If the (x,y) coordinate is out of the map boundaries, nan will be
            returned. """
        # TODO: implement this
The occupancy field is a structure that helps you to compute the likelihood of a laser range measurement given a hypothesized robot pose. You can either implement the class above, or get the whole implementation from me if you choose level 1.
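If you do decide to implement it yourself, one possible approach (a sketch only; it assumes numpy and scipy are available, and build_closest_occ is a hypothetical helper rather than part of the starter code) is to build a KD-tree over the occupied cells and query it for every cell in the grid:

import numpy as np
from scipy.spatial import cKDTree

def build_closest_occ(grid_msg, occupied_threshold=50):
    """ For every cell of a nav_msgs/OccupancyGrid, compute the distance
        (in meters) to the nearest occupied cell. """
    info = grid_msg.info
    data = np.asarray(grid_msg.data).reshape(info.height, info.width)
    occupied_cells = np.argwhere(data >= occupied_threshold)    # (row, col) pairs
    tree = cKDTree(occupied_cells)
    # coordinates of every cell in the grid, in the same (row, col) convention
    rows, cols = np.indices((info.height, info.width))
    all_cells = np.column_stack((rows.ravel(), cols.ravel()))
    distances, _ = tree.query(all_cells)
    # convert from grid cells to meters and restore the grid shape
    return distances.reshape(info.height, info.width) * info.resolution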
def resample_particles(self):
    """ Resample the particles according to the new particle weights """
    # make sure the distribution is normalized
    self.normalize_particles()
    # TODO: fill out the rest of the implementation
This is where you will be resampling the particles (i.e., "Draw a new sample of particles using the weights, w_{t,i}").
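One way the rest of this function could be filled in (a sketch under the assumption that each Particle stores its weight in a w attribute and that the weights have already been normalized) is to draw particle indices in proportion to the weights:

import numpy as np
from copy import deepcopy

def resample(particles):
    """ Draw a new particle set, with replacement, in proportion to the weights. """
    weights = [p.w for p in particles]
    indices = np.random.choice(len(particles), size=len(particles), p=weights)
    # deep copy so that duplicated particles can later be perturbed independently
    new_particles = [deepcopy(particles[i]) for i in indices]
    # after resampling, reset the weights to be uniform
    for p in new_particles:
        p.w = 1.0 / len(new_particles)
    return new_particles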
Hints:
A view of the finish line:
You can test out the builtin particle filter (plus path planning) on the Neato simulator by executing these commands:
$ roslaunch neato_simulator neato_tb_playground.launch
$ roslaunch neato_2dnav amcl_builtin.launch map_file:=`rospack find neato_2dnav`/maps/playground_smaller.yaml
$ roslaunch turtlebot_rviz_launchers view_navigation.launch
A Possible Step-by-step Process for completing the Particle Filter
Familiarize yourself with the starter code:
A useful exercise would be to go through the starter code and create a diagram listing each of the classes. Within each class, try to list out all of the functions (I would avoid worrying about the static functions such as the TransformHelpers). Now that you have each of the functions written down, draw links between the functions that call each other. If a function is called by ROS (i.e. it is a callback function for a ROS msg), then you should indicate that in your diagram. Some important things to get clarity on:
Run the starter code:
Now that you have examined the code, it is time to run it. When you first run the code, it won't do much. The easiest way to invoke the code is to use the simulator as your platform. The following two commands will start up the simulator, load a map file of the playground world, and start up your particle filter:
$ roslaunch neato_simulator neato_tb_playground.launch
$ roslaunch my_pf test_my_pf.launch map_file:=`rospack find neato_2dnav`/maps/playground_smaller.yaml use_sim_time:=true
Note: the use_sim_time:=true is necessary because you are using the simulator. If you want to run on the actual robot, remove that part of the launch line.
There won't be much happening at this point. The one thing you can start to investigate is the particle_cloud topic that your particle filter code should be publishing. Right now the cloud will be empty.
$ rostopic echo /particle_cloud
---
header:
  seq: 12
  stamp:
    secs: 238
    nsecs: 250000000
  frame_id: map
poses: []
---
header:
  seq: 13
  stamp:
    secs: 238
    nsecs: 460000000
  frame_id: map
poses: []
Adding your First Particle:
Now that you can run the node, you can start to work on initializing the particle filter. To start let's add the line of code marked with a ### shown below:
def initialize_particle_cloud(self, xy_theta=None):
    """ Initialize the particle cloud.
        Arguments
        xy_theta: a triple consisting of the mean x, y, and theta (yaw) to initialize the
                  particle cloud around.  If this input is omitted, the odometry will be used """
    if xy_theta is None:
        xy_theta = TransformHelpers.convert_pose_to_xy_and_theta(self.odom_pose.pose)
    self.particle_cloud = []
    ### ADD YOUR FIRST PARTICLE (at the origin)
    self.particle_cloud.append(Particle(0, 0, 0))
    # TODO create particles
    self.normalize_particles()
    self.update_robot_pose()
If you then launch your modified code using the steps outlined above, you can fire up rviz to see the particle you just added:
$ roslaunch turtlebot_rviz_launchers view_navigation.launch
If you zoom in on the robot model you should see a single green arrow. Most of the arrow will be covered up by the robot model, so you might want to temporarily disable the robot model view so you can better see the particle.
The next step is to create a whole cloud of particles centered at x, y, and theta (with a bit of noise so you get some diversity).
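A minimal sketch of what this could look like (make_particle_cloud is a hypothetical helper rather than part of the starter code, and the particle count and noise levels are arbitrary choices):

import random

def make_particle_cloud(xy_theta, n_particles=300, xy_noise=0.25, theta_noise=0.25):
    """ Build a list of Particle hypotheses scattered around an (x, y, theta) triple. """
    return [Particle(xy_theta[0] + random.gauss(0, xy_noise),
                     xy_theta[1] + random.gauss(0, xy_noise),
                     xy_theta[2] + random.gauss(0, theta_noise))
            for i in range(n_particles)]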
Incorporating odometry data:
Next, I recommend incorporating the odometry data by fleshing out this function:
def update_particles_with_odom(self, msg):
    """ Implement a simple version of this (Level 1) or a more complex one (Level 2) """
    new_odom_xy_theta = TransformHelpers.convert_pose_to_xy_and_theta(self.odom_pose.pose)
    # compute the change in x,y,theta since our last update
    if self.current_odom_xy_theta:
        old_odom_xy_theta = self.current_odom_xy_theta
        delta = (new_odom_xy_theta[0] - self.current_odom_xy_theta[0],
                 new_odom_xy_theta[1] - self.current_odom_xy_theta[1],
                 new_odom_xy_theta[2] - self.current_odom_xy_theta[2])
        self.current_odom_xy_theta = new_odom_xy_theta
    else:
        self.current_odom_xy_theta = new_odom_xy_theta
        return
    # TODO: modify particles using delta
    # For added difficulty: Implement sample_motion_odometry (Prob Rob p 136)
The idea will be to use the variable delta to modify the particles. Remember that delta encodes a change in pose that must be applied relative to each of the hypotheses. So, make sure to account for the orientation of each hypothesis when modifying the particles using delta_x and delta_y. Also, make sure to add a bit of noise to increase the diversity of your particle set (see the notes earlier in this writeup for more information).
Normalize the particle distribution: fill in the code in the function normalize_particles
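A minimal sketch of what the body of normalize_particles might look like (assuming, as elsewhere in these sketches, that each Particle stores its weight in a w attribute):

def normalize_particles(self):
    """ Rescale the particle weights so that they sum to one. """
    total = sum(p.w for p in self.particle_cloud)
    if total > 0:
        for p in self.particle_cloud:
            p.w /= total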
Loading the Map:
The launch file for testing your particle filter will also run a node called map_server. The map_server provides a service called static_map. If you call the service from your code, you will have a nav_msgs/OccupancyGrid message that you can pass to the init method of OccupancyField. Make sure to read the ROS tutorials on calling services to make this work.
Once you load the map make sure to uncomment the code that initializes the OccupancyField given the map.
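As a sketch, calling the static_map service from Python could look something like this (the service name and type follow the standard map_server interface; OccupancyField is the starter-code class described above, and where exactly you put this call in your node is up to you):

import rospy
from nav_msgs.srv import GetMap

# block until the map_server's service is available, then request the map
rospy.wait_for_service('static_map')
get_map = rospy.ServiceProxy('static_map', GetMap)
map_msg = get_map().map          # a nav_msgs/OccupancyGrid
occupancy_field = OccupancyField(map_msg)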
Incorporating Lidar Data:
The heart of your particle filter will be computing the likelihood of a particular range scan given a particular pose. There are a few ways to do this, but the suggested method is to use something called a likelihood field. To get a feel for how it works, consider the case of only processing the lidar measurement at 0 degrees. For each of your particles the lidar reading at 0 degrees will specify a different point in the 2-d map (since each particle specifies a unique pose for the robot). Given the 2-d coordinate of the detection, you can use the OccupancyField data structure to compute the distance between the detected obstacle point and the closest obstacle in the given map. We can model this distance between the detection and the closest obstacle as coming from a Gaussian distribution. Therefore, the likelihood of a particular measurement is proportional to e^(-d*d/(2*sigma*sigma)) (where sigma is the standard deviation of the Gaussian distribution, which basically specifies how noisy you expect the laser measurements to be).
Once you have incorporated the lidar data immediately in front of the robot, you can loop over all of the readings and combine the likelihoods. There are several possible approaches to this. Depending on your assumptions, you may multiply the likelihoods (be careful of numerical underflow) or average them. Talk to me for more guidance on this.
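Here is a rough sketch of how the likelihood field update might look for one particle (scan_likelihood is a hypothetical helper; it assumes one range reading per degree, treats readings of 0.0 as missing, and averages the per-reading likelihoods, which is just one of the combination strategies mentioned above):

import math

def scan_likelihood(particle, ranges, occupancy_field, sigma=0.1):
    """ Likelihood of a full lidar scan given a hypothesized pose (particle). """
    likelihoods = []
    for angle_deg, r in enumerate(ranges):
        if r == 0.0:          # assume 0.0 marks a missing measurement
            continue
        # project the reading into the map frame using this particle's pose
        angle = particle.theta + math.radians(angle_deg)
        px = particle.x + r * math.cos(angle)
        py = particle.y + r * math.sin(angle)
        d = occupancy_field.get_closest_obstacle_distance(px, py)
        if math.isnan(d):     # the projected point fell outside the map
            continue
        likelihoods.append(math.exp(-d**2 / (2 * sigma**2)))
    if not likelihoods:
        return 0.0
    return sum(likelihoods) / len(likelihoods)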
Update Robot Pose:
Compute the robot pose by either taking a weighted average of the poses (with the weights given by the particle weights) or by taking the most likely pose (the maximum weight particle).
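A sketch of the weighted-average option (compute_mean_pose is a hypothetical helper; it assumes normalized weights and averages the headings via their unit vectors to handle angle wraparound):

import math

def compute_mean_pose(particles):
    """ Weighted average of the particle poses (x, y, theta). """
    mean_x = sum(p.w * p.x for p in particles)
    mean_y = sum(p.w * p.y for p in particles)
    mean_theta = math.atan2(sum(p.w * math.sin(p.theta) for p in particles),
                            sum(p.w * math.cos(p.theta) for p in particles))
    return (mean_x, mean_y, mean_theta)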
Resample Particles:
The resample step can be done fairly straightforwardly using the given helper function weighted_values.
Possible Extensions (not to the due date, but to the particle filter)