So far, you have successfully programmed robots using reactive control strategies. You have also combined these reactive control strategies together using finite state control. Next, you will build on these skills as you continue to explore the field of mobile robotics. In this project you will tackle a fundamental problem in mobile robotics: robot localization.
For this project, you must work with one other student. I will only allow three students per team in the case of an odd number of students in a particular section.
There are three deliverables for this project.
In-class Presentation / Demo
I'd like each team to spend about 5 minutes presenting what they did for this project. Since everyone's doing the same project, there's no need to provide any context as to what the particle filter is or how it works. I'd like each team to talk about anything they did that adds to the overall knowledge of the class. Examples of this would be non-trivial design decisions you made (and why you made them), development processes that worked particularly well, etc. In addition, you should show a demo of your system in action.
This deliverable will be assessed in a binary fashion (did you do the above or not).
Your code
Your code should be forked from this repo. Please push your code to your fork in order to hand it in.
Bag Files
You must include a couple of bag files of your code in action. Place the bag files in a subdirectory of your ROS package called "bags". In this folder, create a README file that explains each of the bag files (how they were collected, what you take from the results, etc.).
Writeup
In your ROS package, create a file to hold your project writeup. Any format is fine (markdown, Word, pdf, etc.). Your writeup should touch on the following topics:
For this project you will be programming your robot to answer a question that many of us have pondered ourselves over the course of our lifetimes: "Where am I?". The answer to this question is of fundamental importance to a number of applications in mobile robotics such as mapping and path planning.
For this project, we will assume access to a map of the robot's environment. This map could be obtained using a mapping algorithm (such as SLAM), derived from building blueprints, or even hand-drawn. Given that we have a map of our environment, we can pose the question of "Where am I?" probabilistically. That is, at each instant in time t, we will construct a probability distribution over the robot's pose within the map, denoted as x_t, given the robot's sensory inputs, denoted as z_t (in this case lidar measurements), and the motor commands sent to the robot at time t, denoted as u_t.
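Written out, the quantity we are estimating at each time step is:

p(x_t | z_t, u_t)

(this is the per-step form matching the notation above; in the full Bayes-filter formulation, the belief conditions on the entire history of measurements z_{1:t} and commands u_{1:t}).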
The particle filter involves the following key steps:
Getting the Starter Code
The starter code will be in a package called my_localizer inside of the robot_localization_2017 Github repo. Before you can run the starter code, you must install scikit-learn. If you already have pip installed, run the following (Note: make sure pip is installing packages for Python 2 and not Python 3):
$ sudo pip install sklearn
You also need to have scipy installed:
$ sudo pip install scipy
If you don't have pip yet, run this command first.
$ sudo apt-get install python-pip
If for some reason the installation doesn't work, you can try installing via apt-get:
$ sudo apt-get install python-sklearn
Particle Filter Starter Code Flow Chart Overview
Before diving in, you should get a sense for how all of the different functions in the particle filter connect together. To help with this I have put together a process flow diagram. The diagram shows the three major workflows in the particle filter code.
Here is how to engage with the diagram.
Please provide feedback on this diagram as that would be really useful for me next year.
Testing the Particle Filter
In order to test your particle filter, you will need to create a map of the room you will be testing in. For a testing space, I suggest flipping a couple of the tables in AC109 on their sides. This will create easily identifiable landmarks (walls) that the robot can use to make a map. Make sure not to do this when another class is starting, though!
To make a map, first connect to the robot in the usual way. Next, run:
$ roslaunch neato_2dnav gmapping_demo.launch
This will start the SLAM code. To view the results, you will need to add a map topic visualization in rviz.
Pilot the robot around for a while. Once you are satisfied with your map, save it:
$ rosrun map_server map_saver -f ~/mymap
This will save your map to your home directory as two files (mymap.pgm and mymap.yaml).
You are now done! You should close gmapping_demo.launch.
In order to test out your map with the built-in particle filter, you can run:
$ roslaunch neato_2dnav amcl_builtin.launch map_file:=/home/pruvolo/mymap.yaml
When you start to implement your own particle filter, you can run it using:
$ roslaunch my_localizer test.launch map_file:=/home/pruvolo/mymap.yaml
For some reason, this doesn't seem to work if you put ~/mymap.yaml, so I typed the whole path out (tab completion will also properly substitute ~ for your actual home directory). If you still have rviz open, you can set an initial guess as to the robot's position using the "2D Pose Estimate" widget.
Next, drive the robot around. If you had a good initial guess, hopefully the robot will be able to localize itself in the environment.
Another way to proceed is to test out the built-in particle filter in the simulator. I have already included a map generated by running gmapping in the simulator. To run the built-in particle filter, follow these steps:
$ roslaunch neato_simulator neato_playground.launch
$ roslaunch neato_2dnav amcl_builtin.launch map_file:=`rospack find neato_2dnav`/maps/playground_smaller.yaml
$ roslaunch turtlebot_rviz_launchers view_navigation.launch
Using the regular scan topic
Right now the robot localization code is set to use stable_scan by default. If you'd like to try the regular scan topic instead, you can override this behavior. For instance, you can simply add the argument scan_topic:=scan to any of the examples above (this works for testing the built-in particle filter, for gmapping, and for the launch file provided to test your own particle filter).
For example, to use the built-in particle filter in the example above, you would run:
$ roslaunch neato_2dnav amcl_builtin.launch map_file:=/home/pruvolo/mymap.yaml scan_topic:=scan
Using the static_map Service
Getting the map from the map_server node is a bit tricky. If you read through these ROS tutorials on defining ROS service messages and calling services in Python along with the documentation on the map_server's supported services, you should get the basic idea. Let me know ASAP if you are having trouble with this and I'll add more detailed info.
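If it helps, here is a minimal sketch of calling the service from Python using the GetMap service type from nav_msgs (the node name is just for illustration):

import rospy
from nav_msgs.srv import GetMap

rospy.init_node('map_fetcher')
rospy.wait_for_service('static_map')             # block until map_server is up
get_map = rospy.ServiceProxy('static_map', GetMap)
map_msg = get_map().map                          # a nav_msgs/OccupancyGrid
print map_msg.info.width, map_msg.info.height, map_msg.info.resolution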
Initializing the Particles
To initialize the particle filter, we simply sample the particles according to p(x_0). Rviz has a widget for selecting a guess as to the starting pose of your robot. The starter code subscribes to the appropriate topic, and you will need to fill in code to take the message you get from rviz and generate a sample of particles.
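Here is a minimal sketch of that initialization, assuming a Particle class with x, y, theta, and w attributes like the one in the starter code, and that xy_theta holds the (x, y, theta) tuple extracted from the rviz message (the noise scales are illustrative guesses):

import random

def initialize_particle_cloud(self, xy_theta, n_particles=300):
    """ Sample n_particles particles from a Gaussian centered on the initial
        pose guess """
    self.particle_cloud = []
    for _ in range(n_particles):
        x = xy_theta[0] + random.gauss(0, 0.25)      # meters of position noise
        y = xy_theta[1] + random.gauss(0, 0.25)
        theta = xy_theta[2] + random.gauss(0, 0.25)  # radians of heading noise
        self.particle_cloud.append(Particle(x=x, y=y, theta=theta, w=1.0))
    self.normalize_particles()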
Update the Particles Using Odometry
This function is where you use the odometry change, delta, to update the position of each of your particles.
def update_particles_with_odom(self, msg):
    """ Implement a simple version of this (Level 1) or a more complex one
        (Level 2) """
    new_odom_xy_theta = TransformHelpers.convert_pose_to_xy_and_theta(self.odom_pose.pose)
    # compute the change in x, y, theta since our last update
    if self.current_odom_xy_theta:
        delta = (new_odom_xy_theta[0] - self.current_odom_xy_theta[0],
                 new_odom_xy_theta[1] - self.current_odom_xy_theta[1],
                 new_odom_xy_theta[2] - self.current_odom_xy_theta[2])
        self.current_odom_xy_theta = new_odom_xy_theta
    else:
        self.current_odom_xy_theta = new_odom_xy_theta
        return
    # TODO: modify particles using delta
It is a bit hard to conceptualize what is happening here geometrically. Here is a picture that should help connect each of the variables in the code above with its meaning in terms of the robot's odometry. This picture captures the state of the world at the # TODO comment. The robot has moved from an initial odometry position, shown as the red vector at the bottom of the figure, to a new odometry position (shown in blue) at the top of the figure. Note: the second red vector is simply there for convenience to show the rotation delta[2].
The easiest way to characterize what is happening geometrically is to think of the steps the robot would have had to take to move between the old odometry pose and the new odometry pose. We can think of this as happening in three steps: first, rotate in place by an angle r1 so that the robot faces the new position; second, drive straight ahead a distance d to reach the new position; third, rotate in place by an angle r2 to arrive at the new heading.
Here is what each of these quantities looks like in our diagram:
Once you have computed each of r1, r2, and d, you can apply them to each of your particles to get their new poses. Remember, each of these steps should be done assuming the robot's pose is given by the x, y, and theta for each particle (not for the odometry). Make sure to add noise in these various steps to give your particles some diversity.
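Here is a minimal sketch of applying this decomposition, assuming the starter code's particle attributes (the noise scales are illustrative assumptions; note that by the time execution reaches the # TODO comment, self.current_odom_xy_theta already holds the new odometry pose):

import math
import random

# recover the old odometry heading (current_odom_xy_theta was already updated)
old_theta = self.current_odom_xy_theta[2] - delta[2]
d = math.hypot(delta[0], delta[1])               # distance driven
r1 = math.atan2(delta[1], delta[0]) - old_theta  # rotation to face the new position
r2 = delta[2] - r1                               # remaining rotation to the new heading

for p in self.particle_cloud:
    p.theta += r1 + random.gauss(0, 0.05)        # first rotation, with noise
    noisy_d = d + random.gauss(0, 0.02)
    p.x += noisy_d * math.cos(p.theta)           # translation, with noise
    p.y += noisy_d * math.sin(p.theta)
    p.theta += r2 + random.gauss(0, 0.05)        # second rotation, with noise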
Reweight the particles based on their compatibility with the laser scan
def update_particles_with_laser(self, msg):
    """ Updates the particle weights in response to the scan contained in the
        msg """
    # TODO: implement this
    pass
This is where you modify the weight of each particle using the laser scan data. The default method of doing this will be using a likelihood field model (details to be presented in class). Here is the code for the OccupancyField class.
class OccupancyField(object):
    """ Stores an occupancy field for an input map.  An occupancy field returns
        the distance to the closest obstacle for any coordinate in the map

        Attributes:
            map: the map to localize against (nav_msgs/OccupancyGrid)
            closest_occ: the distance for each entry in the OccupancyGrid to
                the closest obstacle
    """
    # some other functions omitted

    def get_closest_obstacle_distance(self, x, y):
        """ Compute the closest obstacle to the specified (x,y) coordinate in
            the map """
        # code omitted (this is already implemented)
The occupancy field is a structure that helps you to compute the likelihood of a laser range measurement given a hypothesized robot pose. To get a feel for how it works, consider the case of only processing the lidar measurement at 0 degrees. For each of your particles, the lidar reading at 0 degrees will specify a different point in the 2-d map (since each particle specifies a unique pose for the robot). Given the 2-d coordinate of the detection, you can use the OccupancyField data structure to compute the distance, d, between the detected obstacle point and the closest obstacle in the given map. We can model this distance as coming from a Gaussian distribution, so the likelihood of a particular measurement is proportional to e^(-d^2 / (2*sigma^2)) (where sigma is the standard deviation of the Gaussian and specifies how noisy you expect the laser measurements to be).
Once you have incorporated the lidar data immediately in front of the robot, you can loop over all of the readings and combine the likelihoods together. There are several possible approaches to this. Depending on your assumptions, you may multiply the likelihoods (be careful of numerical underflow) or average them. Talk to me for more guidance on this.
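Here is a minimal sketch of one way to do this loop, assuming the Neato's scan has one reading per degree and that the occupancy field is stored in self.occupancy_field (sigma, the attribute names, the averaging choice, and the NaN handling are all illustrative assumptions):

import math

sigma = 0.1  # assumed laser noise level in meters; tune for your robot
for p in self.particle_cloud:
    likelihoods = []
    for i, r in enumerate(msg.ranges):
        if r == 0.0:
            continue  # the Neato reports 0.0 for angles with no return
        # project the reading into the map frame from this particle's pose
        angle = p.theta + math.radians(i)
        obstacle_x = p.x + r * math.cos(angle)
        obstacle_y = p.y + r * math.sin(angle)
        d = self.occupancy_field.get_closest_obstacle_distance(obstacle_x, obstacle_y)
        if math.isnan(d):
            continue  # the projected point fell outside the map
        likelihoods.append(math.exp(-d**2 / (2 * sigma**2)))
    if likelihoods:
        p.w = sum(likelihoods) / len(likelihoods)  # average the per-beam likelihoods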
Resample Particles
def resample_particles(self):
    """ Resample the particles according to the new particle weights """
    # make sure the distribution is normalized
    self.normalize_particles()
    # TODO: fill out the rest of the implementation
This is where you will be resampling the particles (i.e., drawing a new sample of particles using the weights w_t,i). There are some helper functions in the starter code that should make this step relatively straightforward (see draw_random_sample and weighted_values).
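As a minimal sketch, here is the same idea using numpy directly (self.n_particles and the deepcopy-based duplication are assumptions; the starter code's helpers accomplish something similar):

from copy import deepcopy
import numpy as np

# sample particle indices in proportion to the normalized weights
weights = [p.w for p in self.particle_cloud]
indices = np.random.choice(len(self.particle_cloud),
                           size=self.n_particles,
                           p=weights)
# deepcopy so that duplicated particles can later be perturbed independently
self.particle_cloud = [deepcopy(self.particle_cloud[i]) for i in indices]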
I have included some bag files in the repository that you can use to work on the project. The bag files included are in the following locations.
my_localizer/bags/ac109_1.bag
my_localizer/bags/ac109_2.bag
Each has a corresponding map file.
my_localizer/maps/ac109_1.yaml
my_localizer/maps/ac109_2.yaml
In order to see what these bags contain, use the rosbag info command (here I am running this from the robot_localization_2017 directory).
$ rosbag info my_localizer/bags/ac109_1.bag
path: my_localizer/bags/ac109_1.bag
version: 2.0
duration: 2:04s (124s)
start: Feb 07 2017 16:32:13.05 (1486503133.05)
end: Feb 07 2017 16:34:17.98 (1486503257.98)
size: 32.7 MB
messages: 19365
compression: none [24/24 chunks]
types: geometry_msgs/PoseStamped [d3812c3cbc69362b77dc0b19b345f8f5]
geometry_msgs/Twist [9f195f881246fdfa2798d1d3eebca84a]
nav_msgs/OccupancyGrid [3381f2d731d4076ec5c71b0759edbe4e]
nav_msgs/Odometry [cd5e73d190d741a2f92e81eda573aca7]
neato_node/Accel [207a3851a50869ae8ce637885057d51b]
neato_node/Bump [459d87767ce0f2ebdc162046e9ad2c13]
rosgraph_msgs/Log [acffd30cd6b6de30f120938c17c593fb]
sensor_msgs/LaserScan [90c7ef2dc6895d81024acba2ac42f369]
sensor_msgs/PointCloud [d8e9c3f5afbdd8a130fd1d2763945fca]
tf2_msgs/TFMessage [94810edda583a504dfda3829e70d7eec]
topics: accel 2504 msgs : neato_node/Accel
bump 2502 msgs : neato_node/Bump
cmd_vel 54 msgs : geometry_msgs/Twist
map_pose 801 msgs : geometry_msgs/PoseStamped
map_pose_continuous 2478 msgs : geometry_msgs/PoseStamped
odom 2499 msgs : nav_msgs/Odometry
projected_stable_scan 895 msgs : sensor_msgs/PointCloud
rosout 23 msgs : rosgraph_msgs/Log
rosout_agg 11 msgs : rosgraph_msgs/Log
scan 895 msgs : sensor_msgs/LaserScan
stable_scan 895 msgs : sensor_msgs/LaserScan
tf 5785 msgs : tf2_msgs/TFMessage
transformed_map 23 msgs : nav_msgs/OccupancyGrid
This bag file has all of the topics you are accustomed to seeing, with the addition of two new topics: map_pose and map_pose_continuous. These topics each encode the position of the robot in the map frame as determined from the ceiling-mounted April tag markers (where the map frame is defined by the map stored in the maps subdirectory). The two topics differ in that the map_pose topic is only published when a marker can be seen by the camera, while the map_pose_continuous topic is updated using the odometry even when the robot hasn't seen one of the markers. For the purposes of the bag files I have given you, you can use map_pose for your validation. map_pose_continuous may come in handy for working with maps where the robot moves out of range of the markers.
In order to test your code with the bag file, you will want to first close any ROS nodes that you have running. Once you have done this, you can set the use_sim_time parameter so that every ROS node you launch from now on knows to use the clock as defined in the ROS bag.
$ rosparam set /use_sim_time true
Next, make sure you set the mesh so that the Neato is displayed properly (Note: this will not be needed if you have already connected to a Neato since starting roscore).
$ roslaunch neato_node set_urdf.launch
Next, start up your ROS bag, and immediately pause it by hitting the space bar. For example, you could run this command and then tap the space bar immediately after.
$ rosbag play --clock bags/ac109_1.bag
In order to test a particle filter, you will now need to run the appropriate launch file. If you haven't yet implemented your filter, try it with the built-in filter.
$ roslaunch neato_2dnav amcl_builtin.launch map_file:=/home/pruvolo/catkin_ws/src/robot_localization_2017/my_localizer/maps/ac109_1.yaml
Next, start rviz and add the displays for the following topics (use the "By Topic" tab after clicking on the add button).
Also, change the fixed frame to "map".
Now, have the bag start playing again by clicking on the appropriate terminal (where you ran rosbag play) and tapping the space bar. You should see data start showing up in rviz, including a red arrow corresponding to the pose published on the /map_pose topic. This is the robot's actual location. In order to initialize the particle filter at this location, click on the 2D Pose Estimate widget in rviz, click at the base of the red arrow, and drag in the direction the red arrow is pointing so that the 2D estimate lines up on top of the map_pose arrow (it may help to pause the bag while you do this so the robot doesn't move during this step).
As the bag plays you will see the robot move around in the map. If all is well, the icon of the robot should line up with the red arrow. The difference between the two tells you the magnitude of the error of the particle filter. This can be used to automatically tune a particle filter.
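If you'd rather quantify this error than eyeball it, a small hypothetical helper along these lines would do (the (x, y, theta) tuple format matches convert_pose_to_xy_and_theta from the starter code):

import math

def position_error(true_xy_theta, estimated_xy_theta):
    """ Euclidean distance between the ground-truth pose (from /map_pose)
        and the particle filter's pose estimate """
    dx = true_xy_theta[0] - estimated_xy_theta[0]
    dy = true_xy_theta[1] - estimated_xy_theta[1]
    return math.hypot(dx, dy)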
If you'd like to collect your own bag file with ground truth position information, read the instructions page.