The main goal of our final project was to explore advanced control, localization, and planning algorithms in the context of miniature self-driving. We designed, built, and programmed a 1/10th-scale RC car to navigate a Formula 1 race circuit. Our learning goals were to gain a better understanding of both visual SLAM and control algorithms by programming an autonomous car. An autonomous vehicle requires algorithms to map out a space and then navigate itself within that map. For the visual SLAM aspect, we researched the theory behind bundle adjustment and used a bundle-adjustment-based SLAM system for localization and mapping. For the controls, we started with a simple Ackermann drive model and built upon it so the car could drive autonomously between specified locations. We also implemented a trapezoidal motion profile so the car would run more smoothly and seamlessly. Exploring this field helped us learn the fundamentals used in real self-driving vehicles.
Overall, our robot has three main subsystems: localization, planning, and controls. The localization system includes dead-wheel odometry with interfaces to read the encoder data, visual odometry with an Intel RealSense camera and Nvidia Jetson, and a model of the car dynamics, all of which we aimed to fuse together to produce the most accurate odometry readings. The planning subsystem includes forward and backward trapezoidal motion profiles, cubic spline trajectory generation, and Reeds-Shepp RRT* path planning. Finally, the control subsystem includes both the low-level motor/servo control and the high-level LQR and path-follower algorithms. Together, these subsystems direct the robot to quickly follow a race trajectory.
Our primary form of odometry used two wheel encoders, one horizontal and one vertical. The vertical encoder tracked traditional forward driving, whereas the horizontal encoder tracked lateral movement during drifting. The wheel encoders were connected to an Arduino (for more reliable interrupt signaling), which forwarded readings over serial to the Raspberry Pi. The encoder ticks were then broken into horizontal and vertical components based on the current heading of the robot (obtained from an IMU). This gave us the distance traveled between time steps, which was added to the current position to yield the updated position.
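The per-step update above can be sketched as follows. This is a minimal illustration, not our exact implementation; the `TICKS_PER_METER` constant and the `update_pose` name are hypothetical placeholders for the calibrated values and interfaces we actually used:

```python
import math

TICKS_PER_METER = 2000.0  # hypothetical encoder calibration constant

def update_pose(x, y, d_vert_ticks, d_horiz_ticks, heading_rad):
    """Advance the (x, y) position using encoder tick deltas since the
    last update, rotated into the world frame by the IMU heading."""
    # Convert ticks to meters along the robot's local axes.
    d_fwd = d_vert_ticks / TICKS_PER_METER   # longitudinal travel
    d_lat = d_horiz_ticks / TICKS_PER_METER  # lateral travel (drift)
    # Rotate the local displacement into the world frame.
    c, s = math.cos(heading_rad), math.sin(heading_rad)
    x += d_fwd * c - d_lat * s
    y += d_fwd * s + d_lat * c
    return x, y
```

Because the rotation uses the live IMU heading at each step, forward and drift motion both land in the correct world-frame direction even mid-slide.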
Using the Nvidia Isaac VSLAM library, we were able to obtain visual-inertial odometry by publishing the RealSense camera frames along with the robot's IMU data and subscribing to the visual odometry pose topic. The main challenges in bringing this system online were mismatched system, ROS, and RealSense versions. Unfortunately, we were not able to fully integrate this into the final demos.
With our understanding of the system dynamics, we could predict the state of our robot from the current state by integrating the control inputs over time. This yielded unreliable, drift-prone results but served as a baseline position-tracking setup. We ended up not using it in our final integration.
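A dynamics-based prediction of this kind could look like the sketch below, one Euler step of a kinematic bicycle model. The `WHEELBASE` value is an assumed figure for a 1/10th-scale car, not a measurement from our vehicle:

```python
import math

WHEELBASE = 0.32  # meters; hypothetical value for a 1/10th-scale car

def predict_state(x, y, theta, v, steer, dt):
    """One Euler step of a kinematic bicycle model: propagate the pose
    from the current state and the commanded velocity/steering angle."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += (v / WHEELBASE) * math.tan(steer) * dt
    return x, y, theta
```

Any error in `v` or `steer` (wheel slip, ESC lag) accumulates without bound, which is exactly the drift that made this unsuitable on its own.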
Motion profiling is a controls algorithm that defines a one-dimensional position, speed, and acceleration for every timestep. Its main purpose is to help the robot move smoothly between two positions by bounding the acceleration and deceleration, which in turn bound the forces on the system and the torque required from the motor. We also implemented a rotational velocity limit, which we integrated by applying a forward and a backward pass over the maximum velocities and interpolating using the maximum acceleration.
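The forward/backward pass can be sketched as below. This is a simplified version assuming constant point spacing; `profile_speeds` and its arguments are illustrative names, with the per-point speed caps standing in for limits such as our rotational velocity limit:

```python
import math

def profile_speeds(ds, v_caps, a_max, v_start=0.0, v_end=0.0):
    """Forward/backward pass over per-point speed caps.

    ds:     distance between consecutive points (constant spacing)
    v_caps: per-point maximum speeds (e.g. from a curvature limit)
    a_max:  maximum acceleration/deceleration magnitude
    """
    n = len(v_caps)
    v = list(v_caps)
    # Forward pass: limit acceleration away from the start speed.
    v[0] = min(v[0], v_start)
    for i in range(1, n):
        v[i] = min(v[i], math.sqrt(v[i - 1] ** 2 + 2 * a_max * ds))
    # Backward pass: limit deceleration into the end speed.
    v[-1] = min(v[-1], v_end)
    for i in range(n - 2, -1, -1):
        v[i] = min(v[i], math.sqrt(v[i + 1] ** 2 + 2 * a_max * ds))
    return v
```

The forward pass enforces how fast the car can ramp up; the backward pass enforces how early it must brake; taking the pointwise minimum yields the trapezoidal (or triangular, on short segments) profile.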
In order to follow a set of waypoints, we fitted a cubic spline, which maintains a continuous derivative over the entire path. This allowed us to compute the curvature at every point and use it to determine our target steering angle. Combining this with the previously computed motion profile, we assign a goal state to each time point between the start and end times. This goal state is what the LQR follower uses to compute control inputs.
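The curvature-to-steering step can be sketched as follows, using the standard signed-curvature formula for a parametric curve and the Ackermann relation between path curvature and steering angle. The `WHEELBASE` value and function names are illustrative assumptions:

```python
import math

WHEELBASE = 0.32  # meters; hypothetical 1/10th-scale wheelbase

def curvature(dx, dy, ddx, ddy):
    """Signed curvature of a parametric path from its first and second
    derivatives with respect to the spline parameter."""
    return (dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5

def steering_for_curvature(kappa):
    """Ackermann steering angle that tracks a path of curvature kappa."""
    return math.atan(WHEELBASE * kappa)
```

Evaluating the spline's first and second derivatives at each sample gives the curvature, which maps directly to the target steering angle at that point.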
To allow our robot to complete sophisticated movement, we explored the use of RRT and RRT* motion planning with Reeds-Shepp drive.
Rapidly-exploring random trees (RRT) is a path-planning algorithm that grows a tree of nodes for a robot to travel between in order to find a path between points A and B. However, RRT does not generally find the best path to the goal. Because nodes are generated randomly, the algorithm essentially guarantees a valid path between A and B, but the chances of that path being optimal are slim. RRT* builds upon the RRT framework and is asymptotically optimal, at the price of a higher computational cost. RRT* addresses the issue by rewiring edges to minimize the cost to each node: for every node within a specified search radius of a newly added node, it checks whether reconnecting through the new node gives a shorter path back to the start.
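The choose-parent and rewire steps that distinguish RRT* from RRT can be sketched as below. This is a minimal obstacle-free version in a 10 x 10 plane (not the Reeds-Shepp variant, which replaces straight-line steering and distances with Reeds-Shepp paths), and it omits propagating updated costs to descendants after a rewire:

```python
import math
import random

class Node:
    def __init__(self, x, y, parent=None, cost=0.0):
        self.x, self.y, self.parent, self.cost = x, y, parent, cost

def dist(a, b):
    return math.hypot(a.x - b.x, a.y - b.y)

def rrt_star(start, n_iter=500, step=1.0, radius=2.0, seed=0):
    """Minimal obstacle-free RRT* illustrating the choose-parent and
    rewire steps. Samples are drawn uniformly from [0, 10] x [0, 10]."""
    random.seed(seed)
    nodes = [Node(*start)]
    for _ in range(n_iter):
        sample = Node(random.uniform(0, 10), random.uniform(0, 10))
        nearest = min(nodes, key=lambda n: dist(n, sample))
        d = dist(nearest, sample)
        t = min(1.0, step / d) if d > 0 else 0.0
        new = Node(nearest.x + t * (sample.x - nearest.x),
                   nearest.y + t * (sample.y - nearest.y))
        # Choose parent: lowest-cost connection among nearby nodes.
        near = [n for n in nodes if dist(n, new) <= radius]
        parent = min(near, key=lambda n: n.cost + dist(n, new))
        new.parent, new.cost = parent, parent.cost + dist(parent, new)
        nodes.append(new)
        # Rewire: reconnect neighbors through `new` when that is cheaper.
        for n in near:
            c = new.cost + dist(new, n)
            if c < n.cost:
                n.parent, n.cost = new, c
    return nodes
```

Without the rewire loop this reduces to plain RRT; with it, node costs can only decrease as more samples arrive, which is the mechanism behind the asymptotic optimality guarantee.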
As in a Dubins car, which can move straight ahead, in a left arc, or in a right arc, a Reeds-Shepp car can use all of these movements and can additionally make them in reverse. These cars are unable to move more freely (i.e., sideways or diagonally) due to their physical constraints; the directions they can move in are determined by the angle of the front tires and whether the car is driving forward or in reverse.
An interesting aspect of the Reeds-Shepp car is that the orientation of the vehicle at the beginning and end of the path can be specified. This creates a set of unique paths, as the robot moves both forward and backward to reach the final pose.
This ended up being out of scope for this particular project, but it was a great path to learning and understanding basic motion planning, as well as delving into different types of car models.
In order to abstract the low-level system, we calibrated our PWM control inputs to tractable physical quantities such as radians and meters. We implemented a drive subsystem providing interfaces such as set_control_input, which takes a target velocity and steering angle. More detail on this implementation is in our Project Story 1.
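A sketch of such an abstraction layer is below. The linear calibration constants, limits, and the `write_pwm` callback are all hypothetical stand-ins for the calibrated values and hardware interface described in Project Story 1:

```python
# Hypothetical calibration constants mapping physical targets to PWM.
PWM_NEUTRAL_ESC = 1500   # microseconds of pulse width at zero velocity
PWM_PER_MPS = 100        # microseconds per m/s of target velocity
PWM_CENTER_SERVO = 1500  # microseconds with the wheels straight
PWM_PER_RAD = 400        # microseconds per radian of steering
MAX_STEER = 0.45         # radians; mechanical steering limit

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def set_control_input(velocity_mps, steer_rad, write_pwm):
    """Convert a target velocity (m/s) and steering angle (rad) into
    ESC and servo PWM commands via linear calibration maps."""
    steer_rad = clamp(steer_rad, -MAX_STEER, MAX_STEER)
    esc = PWM_NEUTRAL_ESC + PWM_PER_MPS * velocity_mps
    servo = PWM_CENTER_SERVO + PWM_PER_RAD * steer_rad
    write_pwm("esc", int(esc))
    write_pwm("servo", int(servo))
```

The point of the layer is that everything above it reasons in meters per second and radians, while the calibration constants live in exactly one place.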
With an abstraction layer between the hardware and our software controls in place, we can apply traditional control algorithms that exploit our knowledge of the system dynamics to follow the path optimally. We first linearized the dynamics about the current state and computed the error from the target. We then used LQR to find the control gain matrix and finally computed the control inputs. Repeating this at every time step allows the robot to correct for drift and control stochasticity.
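The LQR step can be sketched as follows, solving the discrete-time Riccati equation by fixed-point iteration. The linearized error dynamics shown (lateral and heading error driven by steering), along with the `dt`, speed, wheelbase, and weight values, are illustrative assumptions rather than our actual matrices:

```python
import numpy as np

def lqr_gain(A, B, Q, R, iters=200):
    """Iterate the discrete-time Riccati equation to convergence and
    return the LQR gain K for the feedback law u = -K x."""
    P = Q
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Hypothetical linearized error dynamics for state [lateral error,
# heading error] and input [steering angle], discretized at dt.
dt, v, L = 0.05, 2.0, 0.32
A = np.array([[1.0, v * dt],
              [0.0, 1.0]])
B = np.array([[0.0],
              [v * dt / L]])
Q = np.diag([1.0, 0.5])  # penalize tracking error
R = np.array([[0.1]])    # penalize steering effort

K = lqr_gain(A, B, Q, R)
error = np.array([0.2, 0.05])  # current deviation from the goal state
u = -K @ error                 # steering correction to apply
```

Because the linearization depends on the current speed and heading, the gain is recomputed (or interpolated from a precomputed table) at each time step rather than fixed once.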
Teleoperation:
The teleoperation behavior was split into two ROS Nodes: SendTeleop and ReceiveTeleop.
SendTeleop: The SendTeleop Node is responsible for processing a user’s input and converting it to a drive command (steering angle and forward drive speed). Currently, the input comes from a Logitech gamepad.
ReceiveTeleop: The ReceiveTeleop Node is responsible for receiving the published drive command and setting the drive speed (through the ESC) and the steering angle (through the servo) accordingly.
IMU:
The IMU Node is responsible for publishing IMU data from the navX-MXP 9-axis IMU. This was split into its own node because prebuilt libraries already expose the data through a C++ interface.
Odometry:
The Odometry Node is responsible for processing sensor data and publishing odometry data for the controller. This was split into its own node to ensure that the odometry data is continuously updated, as we found that controller computation could delay odometry updates. The Odometry Node interfaces with the IMU for heading data and with an Arduino connected to the wheel encoders for positional travel data.
SplineFollowing:
The Spline Following Node is responsible for navigating the robot as fast as possible around a selected F1TENTH track. It decomposes the track into a cubic spline trajectory and directs the controller to follow points along that spline. This node also depends on the Odometry Node for position and orientation data.