Lab 9: Particle filter on Cozmo (optional)

To keep the workload manageable and let you focus on understanding and implementing a localization algorithm, this week's required lab (Lab 8) does not include the real Cozmo robot. However, we do not want to deprive you of the joy of seeing your algorithm work on a real physical robot. Hence, this optional lab involves transferring your work from the previous lab onto the real robot so that it can localize within its arena using a particle filter.

1. Arena setup: This lab involves a specific arena with markers for the robot to localize in. You can request one of these from the TA, or you can make one yourself. If you would like to create your own, you will first need to print localization markers and attach them to the arena. The file marker.pdf provides copies of the markers we will use. You will need six markers, so print out two single-sided copies of marker.pdf and cut out six full markers, leaving a small white border around each black square. Carefully tape the markers onto your arena such that the corner of the marker highlighted in red below (Figure 1) is the appropriate distance from the designated corner. Your completed arena should look similar to Figure 2.

Figure 1: Marker configuration

Figure 2: Physical arena

2. Marker detection: We have provided marker detection code for your use, located in the ar_marker/ directory. We do not expect you to edit this code, but if you choose to do so, please submit your modified version. To test the code, run python3 test_marker_detection.py with Cozmo on. Place Cozmo about 20 cm from a marker; you should see the marker become highlighted in the image window when it is recognized (Figure 3). Spend a few minutes experimenting with this capability to gain an understanding of the distances and orientations at which the robot is able to detect the marker.

Note that the ID displayed for the marker may change, particularly when the robot does not see the marker very well. This is not an issue since the code relies only on the pose of the marker, and ignores the ID.

Figure 3: Marker

3. Localization: The main component of the lab is to use the particle filter to enable the robot to (a) determine where it is, and (b) use this information to go to a predefined goal location on the map. We have provided starter code in go_to_goal.py, which contains the following functions:

  • image_processing(): waits for a camera image, runs marker detection on the image
  • cvt_2Dmarker_measurements(): marker processing helper function, calculates the position and orientation of detected markers
  • compute_odometry(): calculates the robot’s odometry offset since the last measurement
  • run(): the main processing loop, enter your code here

The main processing loop should do the following (a sketch of one possible run() structure follows this list):

  • Obtain odometry information
  • Obtain list of currently seen markers and their poses
  • Update the particle filter using the above information
  • Update the particle filter GUI for debugging
  • Determine the robot’s actions based on the current state of the localization system. For example, you may want the robot to actively look around if the localization has not converged (i.e. global localization problem), and drive to the goal if localization has converged (i.e. position tracking problem).
  • Have the robot drive to the goal. Note that the goal is defined in terms of both position and orientation. Once there, have the robot play a happy animation, then stand still.
  • Make your code robust to the “kidnapped robot problem” by resetting your localization if the robot is picked up. This should trigger both when the robot is on its way to the goal and once it has reached it (i.e. picking up the robot from the goal and placing it somewhere else should immediately cause it to try to localize itself and search for the goal again).
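The sketch below shows one way run() could be organized around these steps. It is a rough outline under assumptions, not the provided starter code: the signatures of image_processing(), cvt_2Dmarker_measurements(), and compute_odometry(), the ParticleFilter and CozGrid names and module locations, the map file name, the return value of pf.update() (mean pose plus a convergence flag), GRID_UNIT_MM, the goal pose, and the choice of animation trigger are all illustrative placeholders to be replaced with your Lab 8 code and the actual starter interfaces.

```python
import math
import time
import cozmo
from cozmo.util import degrees, distance_mm, speed_mmps

# Assumed to come from your Lab 8 implementation; adjust module/class names as needed.
from particle_filter import ParticleFilter
from grid import CozGrid

GOAL = (6.0, 10.0, 0.0)   # assumed goal pose: x, y in arena grid units, heading in degrees
GRID_UNIT_MM = 25.4       # assumed size of one arena grid unit in mm; adjust to your arena

def run(robot: cozmo.robot.Robot):
    robot.camera.image_stream_enabled = True

    arena_grid = CozGrid("map_arena.json")   # assumed arena map file name
    pf = ParticleFilter(arena_grid)          # assumed Lab 8 particle filter interface
    last_pose = robot.pose                   # for computing odometry between iterations
    reached_goal = False

    while True:
        # Kidnapped-robot check: if picked up, discard the belief and start over.
        if robot.is_picked_up:
            pf = ParticleFilter(arena_grid)
            reached_goal = False
            while robot.is_picked_up:        # wait until the robot is put back down
                time.sleep(0.1)
            last_pose = robot.pose
            continue

        # 1. Odometry since the last iteration (signature assumed).
        odom = compute_odometry(robot.pose, last_pose)
        last_pose = robot.pose

        # 2. Currently visible markers and their poses (signatures assumed).
        markers = cvt_2Dmarker_measurements(image_processing(robot))

        # 3. Particle filter update; assumed to return the mean pose estimate
        #    and whether the estimate has converged.
        x, y, h, converged = pf.update(odom, markers)

        # 4. Update the particle filter GUI here for debugging (depends on the
        #    provided GUI class).

        # 5. Act on the current localization state.
        if not converged:
            # Global localization: turn in place to pick up more markers.
            robot.turn_in_place(degrees(25)).wait_for_completed()
        elif not reached_goal:
            # Position tracking: turn toward the goal, drive there, face the
            # goal heading, then celebrate.
            dx, dy = GOAL[0] - x, GOAL[1] - y
            bearing = math.degrees(math.atan2(dy, dx)) - h
            robot.turn_in_place(degrees(bearing)).wait_for_completed()
            robot.drive_straight(distance_mm(math.hypot(dx, dy) * GRID_UNIT_MM),
                                 speed_mmps(50)).wait_for_completed()
            robot.turn_in_place(degrees(GOAL[2] - (h + bearing))).wait_for_completed()
            robot.play_anim_trigger(cozmo.anim.Triggers.MajorWin).wait_for_completed()
            reached_goal = True
        # else: stand still at the goal; the kidnapped-robot check above keeps running.

# go_to_goal.py's existing entry point is assumed to call run().
```

Note that this sketch drives to the goal in one long segment once the filter has converged; interleaving shorter drive segments with further filter updates is also a reasonable design and tends to be more robust to odometry drift.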

You may find it useful to leverage some of your earlier implementations of finite state machines and navigation. The coordinate frame for the arena is as shown in Figure 4.
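For the "has localization converged" decision mentioned above, one common heuristic is to check how tightly the particles cluster around their mean pose. The sketch below is one such check under assumptions: it presumes a Lab 8 Particle class with x, y, and h attributes, and the thresholds are illustrative values you should tune for your filter. If your particle filter already exposes an equivalent flag (as the pf.update() call in the earlier sketch assumes), you do not need a separate helper.

```python
import math

def estimate_pose(particles, dist_thresh=1.0, angle_thresh=20.0, frac_thresh=0.95):
    """Return (mean_x, mean_y, mean_h, converged) for a list of particles.

    dist_thresh is in the same units as particle x/y; angle_thresh is in degrees.
    The filter is considered converged when at least frac_thresh of the particles
    lie within both thresholds of the mean pose. All thresholds are illustrative.
    """
    n = len(particles)
    mean_x = sum(p.x for p in particles) / n
    mean_y = sum(p.y for p in particles) / n
    # Average headings via unit vectors to handle angle wrap-around correctly.
    mean_h = math.degrees(math.atan2(
        sum(math.sin(math.radians(p.h)) for p in particles),
        sum(math.cos(math.radians(p.h)) for p in particles)))

    def angle_diff(a, b):
        return abs((a - b + 180.0) % 360.0 - 180.0)

    close = sum(1 for p in particles
                if math.hypot(p.x - mean_x, p.y - mean_y) < dist_thresh
                and angle_diff(p.h, mean_h) < angle_thresh)
    return mean_x, mean_y, mean_h, close / n >= frac_thresh
```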

To test your implementation, place the robot at a start location from which one or more markers are visible. The robot should be able to explore, localize, and navigate to the goal within 90 seconds. In the next run, try running the robot for about 20 seconds, then "kidnap" it by placing it at a new location. The robot should be able to recover and navigate back to the goal. Given the probabilistic nature of the particle filter, not every run will succeed, so verify the robustness of your implementation across multiple runs. Note that the marker detection code is sensitive to lighting, so you may provide additional lighting, such as with a flashlight, if necessary.

Figure 4: Coordinate frames