Design

Our approach to the project involved the following:

    • Accurately estimate the turtlebot's location
    • Find obstacles near the turtlebot
    • Plan a path around the obstacle
    • Follow given waypoints

With the eventual goal of scaling up this project to avoid people moving near the turtlebot, we made two key design tradeoffs:

    • We used rectangular objects (boxes) instead of people in order to better test the abilities and limitations of the LIDAR sensors
    • We used static objects (non-moving boxes) instead of moving ones in order to better test the optimization algorithm, and because of limitations with the LIDAR data

These tradeoffs allowed us to focus on building a solid code framework, particularly on handling the detected objects in a way that keeps trajectories smooth, and on developing debugging tools, including the Rviz setup and real-time obstacle and path visualization.

LIDAR-based obstacle detection

We use a rotating LIDAR (Light Detection And Ranging) sensor to get a 2D view around our robot. Our goal was to detect obstacles not from a map, but from only the most recent frames of data. In addition, we constrained our obstacle types to rectangular obstacles of a known minimum size. Our general approach was to detect one side of an obstacle, then assume that side belonged to a square obstacle; as a result, we overestimate the size of most rectangular obstacles, unless we approach one head-on from its "skinny" edge. While overestimating isn't ideal, we kept this behavior to reduce the complexity of the code.
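As a rough illustration of this expansion step, the sketch below (Python with numpy) turns a detected side into a square obstacle pushed away from the robot. The function name and the assumption that the robot sits at the origin of the scan frame are ours, not from the project code:

```python
import numpy as np

def side_to_square(p1, p2):
    """Expand a detected side (two endpoints in the robot frame) into the
    four corners of a square obstacle whose near side is that segment."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    side = p2 - p1
    # Unit normal to the side; flip it so it points away from the robot,
    # which we assume sits at the origin of the scan frame
    n = np.array([-side[1], side[0]]) / np.linalg.norm(side)
    if np.dot(n, (p1 + p2) / 2.0) < 0:
        n = -n
    depth = np.linalg.norm(side) * n  # square: depth equals the side length
    return np.array([p1, p2, p2 + depth, p1 + depth])
```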

To detect obstacles, we discretize and rasterize the polar LIDAR data, then detect edges in the resulting image. After detecting edges, we perform a Hough transform to detect lines above a length we specify. We could stop at this point, but we'd end up with too many obstacles for the path planning to run in real time. To improve efficiency, we had to reduce the number of lines (and therefore the number of obstacles we work with). The complication is that one obstacle might end up with 3 to 5 lines tracing its border instead of just one, and we only need one of them. To combat this, we eliminate lines that are "close" or "similar" to each other, keeping only the largest, most prominent detected sides of the obstacles.
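The sketch below shows the shape of this pipeline in Python with OpenCV. The grid resolution, Canny/Hough thresholds, and the distance-based merging rule are illustrative stand-ins for our tuned values:

```python
import numpy as np
import cv2

def detect_obstacle_lines(ranges, angles, grid_res=0.05, grid_size=200):
    # Rasterize polar LIDAR returns into a binary occupancy image
    xs = ranges * np.cos(angles)
    ys = ranges * np.sin(angles)
    img = np.zeros((grid_size, grid_size), dtype=np.uint8)
    cols = (xs / grid_res + grid_size / 2).astype(int)
    rows = (ys / grid_res + grid_size / 2).astype(int)
    ok = (rows >= 0) & (rows < grid_size) & (cols >= 0) & (cols < grid_size)
    img[rows[ok], cols[ok]] = 255

    # Edge detection followed by a probabilistic Hough transform
    edges = cv2.Canny(img, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=10,
                            minLineLength=8, maxLineGap=3)
    if lines is None:
        return []

    # Merge step: walk the segments longest-first and drop any segment
    # whose midpoint is close to one we've already kept
    segs = sorted((tuple(l[0]) for l in lines),
                  key=lambda s: -np.hypot(s[2] - s[0], s[3] - s[1]))
    kept = []
    for s in segs:
        mid = ((s[0] + s[2]) / 2, (s[1] + s[3]) / 2)
        if all(np.hypot(mid[0] - (k[0] + k[2]) / 2,
                        mid[1] - (k[1] + k[3]) / 2) > 10 for k in kept):
            kept.append(s)
    return kept
```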

Localization and sensor fusion

Since the measurements we get from the different on-board sensors aren't perfect, a location estimate built on this noisy data can drift far from the true location over time. To maintain a better estimate at all times, we used a UKF (Unscented Kalman Filter) together with an indoor ultrasonic GPS system, which consists of four stationary beacons, one mobile beacon, and a modem antenna, to estimate a more accurate location for our turtlebot.

To make full use of these beacons, we placed the four static beacons at the corners of the area where the turtlebot moves. Since the turtlebot relies heavily on its single-channel LIDAR, the mobile beacon has to be placed under the top layer of the turtlebot so it doesn't block the LIDAR. However, we found that the signal couldn't reliably reach the mobile beacon there because its antenna was partially blocked. After several tests, we decided to mount the static beacons on the ceiling, which established stable communication between the static beacons and the mobile beacon mounted on the bot.

Since we fuse the data from the turtlebot's wheel odometry, IMU (Inertial Measurement Unit), and the GPS, each sensor's data type must be supported by the UKF node. In our case, we converted the data into the PoseWithCovariance type directly in the node that publishes the mobile beacon's location in the world frame. Since the manufacturer doesn't provide covariance data, we enabled differential mode so the UKF can decide how reliable this data source is. Finally, the UKF fuses all the data and publishes filtered position data, which proved to be more accurate.
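A hedged sketch of that conversion step is below; the frame id and the covariance value are assumptions rather than the project's exact settings:

```python
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

def make_beacon_pose(x, y):
    """Wrap a mobile-beacon (x, y) reading, already in the world frame,
    in the message type the UKF node accepts."""
    msg = PoseWithCovarianceStamped()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = "map"        # world frame defined by the beacons
    msg.pose.pose.position.x = x
    msg.pose.pose.position.y = y
    msg.pose.pose.orientation.w = 1.0  # the beacon provides no heading
    # No covariance from the manufacturer, so we supply a rough guess;
    # differential mode makes the filter less sensitive to this value
    cov = [0.0] * 36
    cov[0] = cov[7] = 0.05             # assumed x/y variance in m^2
    msg.pose.covariance = cov
    return msg
```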

The image above shows the localization view in the client software for the GPS beacons.

Path planning

The optimal path was found by solving a quadratic programming problem that penalized deviations from the previous path, changes in velocity, and changes in acceleration, with constraints that kept the path a preset margin away from each obstacle. The MPC design was developed in the MSC lab [1], with more details explained below.

Constraints

The original starting location of the turtlebot (0, 0) and the goal destination ((6, 0) in the plots below) are used to create the original path.

At a given time step, the current position (x0, y0) of the turtlebot and the current target (xT, yT) are found by moving +3 along the direction of the original path and removing any offset. Points are then linearly spaced between them to create the direct path.

Each point on the direct path is checked against each obstacle to find the side of the polygon it is closest to, and is then required to stay a certain margin away from that side (points closest to a corner are constrained against the corner). As shown below, the blue points (closest to the left edge) should be on the left side of the blue line, the green points above the green line, and the pink points to the right of the pink line. The constraints are softened slightly to ensure a valid solution is always found, which is why some points end up slightly on the near side of their lines.
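A sketch of how these half-plane constraints can be built is below. The margin, point count, and helper names are illustrative, obstacles are assumed to be convex polygons given as corner arrays, and the constraint softening is omitted:

```python
import numpy as np

def direct_path(p0, pT, n_pts=20):
    """Linearly spaced points from the current position to the target."""
    return np.linspace(p0, pT, n_pts)

def halfplane_constraints(points, polygon, margin=0.3):
    """For each path point, find the closest polygon edge and return a
    (normal, offset) pair requiring normal . point >= offset, i.e. the
    point must stay `margin` outside that edge. Points whose closest
    feature is a corner are naturally constrained against the corner."""
    constraints = []
    for p in points:
        best = None
        for a, b in zip(polygon, np.roll(polygon, -1, axis=0)):
            # Closest point on segment a-b (clipping handles the corners)
            t = np.clip(np.dot(p - a, b - a) / np.dot(b - a, b - a), 0.0, 1.0)
            closest = a + t * (b - a)
            d = np.linalg.norm(p - closest)
            if best is None or d < best[0]:
                n = (p - closest) / max(d, 1e-9)  # normal facing the point
                best = (d, n, np.dot(n, closest) + margin)
        constraints.append((best[1], best[2]))
    return constraints
```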

Each successive iteration changes the path less, as can be seen in the larger plot below. For a single obstacle, the path generally took about 3-4 iterations to converge to the optimal solution, but we ran 6 iterations to be safe and to handle multiple obstacles.

TOP: direct path from the starting point to the target point, with the closest edge of the obstacle and the corresponding margin constraint marked
BOTTOM: result of optimizing with those constraints
TOP: result of the first iteration, with the new closest sides marked
BOTTOM: result of the second iteration
TOP: result of the second iteration, with sides marked again
BOTTOM: result of the third iteration, showing decreasing changes with each round of optimization
LEFT: paths generated by successive iterations at a given time step

Costs

The cost function penalizes the following:

  • Deviations from the original path (in this case, non-zero y values)
  • Changes in velocity between time steps
  • Changes in acceleration between time steps

These three costs are weighted and combined to form the quadprog matrices.
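A compressed sketch of the resulting QP, using the quadprog package, is below. The weights, the decision to optimize only the lateral offsets y, and the constraint encoding are illustrative simplifications of the full formulation, and the constraint softening is again omitted:

```python
import numpy as np
import quadprog

def optimize_offsets(y_old, halfplanes, w_dev=1.0, w_vel=10.0, w_acc=100.0):
    """One QP iteration over the lateral offsets y of N path points.

    y_old      -- previous iteration's offsets (all zeros for the original path)
    halfplanes -- list of (index, coeff, rhs) encoding coeff * y[index] >= rhs
    """
    N = len(y_old)
    I = np.eye(N)
    D1 = np.diff(I, axis=0)       # first differences  ~ velocity changes
    D2 = np.diff(I, n=2, axis=0)  # second differences ~ acceleration changes

    # Cost: w_dev*||y - y_old||^2 + w_vel*||D1 y||^2 + w_acc*||D2 y||^2.
    # quadprog minimizes 1/2 y^T G y - a^T y  subject to  C^T y >= b
    G = 2.0 * (w_dev * I + w_vel * D1.T @ D1 + w_acc * D2.T @ D2)
    a = 2.0 * w_dev * np.asarray(y_old, float)

    if not halfplanes:
        return quadprog.solve_qp(G, a)[0]
    C = np.zeros((N, len(halfplanes)))
    b = np.zeros(len(halfplanes))
    for k, (i, coeff, rhs) in enumerate(halfplanes):
        C[i, k] = coeff
        b[k] = rhs
    return quadprog.solve_qp(G, a, C, b)[0]
```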

Path following

The path following algorithm turns the published waypoints (x, y coordinates published by the path planning node) into velocity commands with a linear x and an angular z component. Since the turtlebot has only two powered wheels, we used the kinematic model of a differential drive robot for this conversion, using the equations found in this paper [2].
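For reference, the standard differential drive relations are sketched below. This is the textbook model rather than a transcription of the exact equations in [2], and the wheel radius and separation are assumed turtlebot-like values:

```python
# Standard differential drive kinematics: v = r*(w_r + w_l)/2 and
# omega = r*(w_r - w_l)/L, inverted to get wheel speeds from a command.
WHEEL_RADIUS = 0.033      # r, meters (assumed)
WHEEL_SEPARATION = 0.16   # L, meters (assumed)

def twist_to_wheel_speeds(v, omega):
    """Return (left, right) wheel angular velocities in rad/s for a
    commanded forward velocity v (m/s) and yaw rate omega (rad/s)."""
    w_left = (v - omega * WHEEL_SEPARATION / 2.0) / WHEEL_RADIUS
    w_right = (v + omega * WHEEL_SEPARATION / 2.0) / WHEEL_RADIUS
    return w_left, w_right
```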

TOP: kinematic equations for the differential drive model
TOP: the linear and angular velocities were chosen to reduce both the deviation from the desired path (s) and the deviation from the desired angle (beta)
TOP: equation for finding the angular velocity from the linear velocity; the linear velocity was computed as a function of beta (to slow down during turns)
TOP LEFT: difference between the open-loop planned paths at each time step (red) and the followed path (black), found by running the fake_turtlebot node
BOTTOM LEFT: turtlebot following the path in Rviz; the config files in fake_turtlebot plotted the obstacles and paths for easier debugging

Citations

[1] Leu, J., Liu, C., and Tomizuka, M. “M-Analysis for non-convex motion planning using MPC framework.” in submission.

[2] Jadlovský, J. and Kopčík, M. "Distributed Control System for Mobile Robots With Differential Drive." Retrieved from http://kyb.fei.tuke.sk/laboratoria/ludia/pdf/Kopcik_Jadlovsky_K&I_2015.pdf