Since we have a 3-wheel holonomic drive, we have to be slightly more careful about how we drive our robot. The first step is to use each wheel's orientation to compute its contribution to the overall body velocity. We use the following equation to map between the body velocity Vb and a given wheel's linear velocity Vw. Here θ is the angle of the desired body velocity with respect to the x-axis, ϕ is the angle of the wheel with respect to the x-axis, and Ψ is the current angle of the body with respect to the x-axis.
If we let Vx = Vb * cos(θ) and Vy = Vb * sin(θ), then we get a linear system of equations mapping the linear x velocity, linear y velocity, and angular velocity (Vx, Vy, W) to the three wheels' linear velocities Vw1, Vw2, Vw3. The matrix mapping is given by the following code:
const float BODY_TO_WHEEL[3][3] = {
// Rows correspond to wheels 1-3; columns multiply (Vx, Vy, W).
{-0.5, -0.866, BASELINE},
{-0.5,  0.866, BASELINE},
{ 1.0,  0.0,   BASELINE},
};
Thus, this matrix converts a desired body velocity to corresponding wheel velocities.
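The conversion itself is a plain matrix-vector multiply. A minimal sketch (the BASELINE value here is a placeholder, not our actual chassis dimension):

```cpp
#include <cstddef>

// Distance from robot center to each wheel; placeholder value.
const float BASELINE = 0.1f;

// Rows correspond to wheels 1-3; columns multiply (Vx, Vy, W).
const float BODY_TO_WHEEL[3][3] = {
    {-0.5f, -0.866f, BASELINE},
    {-0.5f,  0.866f, BASELINE},
    { 1.0f,  0.0f,   BASELINE},
};

// Map a desired body velocity (Vx, Vy, W) to the three wheel linear
// velocities via a 3x3 matrix-vector multiply.
void bodyToWheelVelocities(float vx, float vy, float w, float vw[3]) {
    const float body[3] = {vx, vy, w};
    for (std::size_t i = 0; i < 3; ++i) {
        vw[i] = 0.0f;
        for (std::size_t j = 0; j < 3; ++j) {
            vw[i] += BODY_TO_WHEEL[i][j] * body[j];
        }
    }
}
```

Note that a pure rotation command (Vx = Vy = 0) drives all three wheels at the same speed BASELINE * W, which is a quick sanity check on the matrix.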
Once we have the desired wheel velocities, we use a PD controller on each wheel to achieve the setpoint speed. Each PD controller maps the desired wheel velocity to an output PWM command for the motor drivers. We estimate the current wheel velocity by taking a discrete derivative of the wheel's encoder counts. This derivative was very noisy, so we applied an exponential moving average (EMA) filter to get smoother velocities, which gave the controller much better tracking. We first tuned the PD gains on the bench to get them roughly reasonable, then fine-tuned them on the actual competition field. The gains were tricky to perfect, as we had no way of logging the controller error or response curve while the robot was running untethered.
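One wheel's velocity loop can be sketched as below. The gains, EMA weight, and control period are placeholders, not our tuned values, and the output would still need saturation before reaching the motor driver:

```cpp
// PD velocity controller for one wheel, with an EMA filter on the
// discrete derivative of the encoder counts. All constants are
// illustrative placeholders.
struct WheelController {
    float kp = 2.0f;      // proportional gain (placeholder)
    float kd = 0.05f;     // derivative gain (placeholder)
    float alpha = 0.2f;   // EMA weight on the newest sample
    float dt = 0.01f;     // control period in seconds (100 Hz assumed)

    long lastCounts = 0;
    float filteredVel = 0.0f;
    float lastError = 0.0f;

    // encoderCounts: raw cumulative encoder count.
    // target: desired wheel velocity in counts/s.
    // Returns an (unclamped) PWM command.
    float update(long encoderCounts, float target) {
        // Discrete derivative of counts -> raw velocity (counts/s).
        float rawVel = (encoderCounts - lastCounts) / dt;
        lastCounts = encoderCounts;

        // Exponential smoothing to tame encoder quantization noise.
        filteredVel = alpha * rawVel + (1.0f - alpha) * filteredVel;

        float error = target - filteredVel;
        float dError = (error - lastError) / dt;
        lastError = error;

        return kp * error + kd * dError;
    }
};
```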
Sample plot of using an LQR controller to reach a desired pose (x_goal, y_goal, yaw_goal) from a starting pose (0, 0, 0) with noise injected into system dynamics.
We initially also wanted to add a position controller using LQR, but this requires integrating the computed body velocities, and there was too much error in that step to make it work. However, we show our simulation results in the accompanying plot.
Our state equation was:
x_dot = A x + B u
where x = [x_pos, y_pos, yaw] is the pose state and u = [Vw1, Vw2, Vw3] is the wheel velocity input.
We base our values for A and B on derivation 1 and derivation 2.
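The integration step that defeated the position controller can be sketched as a simple Euler update of the pose from body-frame velocities; any bias in the velocity estimate accumulates directly into pose error. The names and values here are illustrative:

```cpp
#include <cmath>

// Pose in the world frame: (x, y, yaw).
struct Pose { float x = 0, y = 0, yaw = 0; };

// Euler-integrate a body-frame velocity (vx, vy, w) into the world
// frame for one timestep. This is where velocity error compounds
// into pose error over a run.
Pose integrate(Pose p, float vx, float vy, float w, float dt) {
    p.x   += (vx * std::cos(p.yaw) - vy * std::sin(p.yaw)) * dt;
    p.y   += (vx * std::sin(p.yaw) + vy * std::cos(p.yaw)) * dt;
    p.yaw += w * dt;
    return p;
}
```

For example, a constant 2% bias in vx grows linearly with time: after 10 s at 1 m/s the pose estimate is already off by roughly 20 cm, which is why we kept the LQR loop in simulation only.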
We used the yaw estimate from the IMU to implement a heading controller. This yaw angle is a fused estimate combining the accelerometer and gyroscope, computed directly on the IMU chip. We then implemented a PI controller with output saturation to track a desired heading. Since our robot starts in a random orientation, we store the robot's yaw once we hit the corner and make all desired headings relative to this reference. This approach worked well in testing, but we found that the IMU would sometimes stop sending updated values to the Teensy, which we hypothesize is due to a bad solder connection between the jumper cable and the Teensy board.
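The heading loop can be sketched as follows: PI control with output saturation and a simple anti-windup scheme (the integrator only runs while the output is unsaturated). Gains and limits are placeholders, not our tuned values:

```cpp
const float PI_F = 3.1415927f;

// PI heading controller with output saturation; desired headings are
// specified relative to the yaw stored when the robot hits the corner.
struct HeadingController {
    float kp = 3.0f, ki = 0.5f;        // placeholder gains
    float outMin = -1.0f, outMax = 1.0f;
    float integral = 0.0f;
    float yawRef = 0.0f;               // yaw stored at the corner

    // Wrap an angle into [-pi, pi] so the robot turns the short way.
    static float wrap(float a) {
        while (a >  PI_F) a -= 2.0f * PI_F;
        while (a < -PI_F) a += 2.0f * PI_F;
        return a;
    }

    // imuYaw: the fused yaw estimate read from the IMU.
    float update(float desiredYaw, float imuYaw, float dt) {
        float error = wrap((yawRef + desiredYaw) - imuYaw);
        float out = kp * error + ki * integral;
        if (out > outMax)      out = outMax;
        else if (out < outMin) out = outMin;
        else integral += error * dt;   // freeze integrator when clamped
        return out;
    }
};
```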
We additionally had a PD velocity controller for our shooter motor, which mapped a target RPM to a PWM command for the motor driver. This controller lets the shooter maintain its velocity while balls are passing through it, making the shots land in the bucket consistently. We again take a discrete derivative of the shooter encoder's counts to get a velocity, but we didn't have to filter it, since the shooter motor's counts per revolution are much lower than the drive motors'.
Since we need to reach a known position to shoot, we first localize ourselves in the grid. We use the corner of the "Studio" as our (0, 0, 0) pose.
1. Rotate in place to find the voltage, V_strongest, corresponding to the strongest signal from the IR beacon. Then, rotate again until we record a voltage within epsilon of V_strongest.
2. Drive "backwards" for T seconds until one edge of the top chassis (which is designed to be a square) is aligned with the wall.
3. Drive "left" until we're jammed against the corner of the playing field.
4. Having reached (0, 0), reset the IMU yaw reference and start driving to the shooting pose.
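The steps above run as a small state machine ticked from the main loop. This is a sketch only: the sensor plumbing, drive commands, and the epsilon/timing constants are placeholder stubs:

```cpp
// Localization sequence as a state machine. Drive commands are issued
// elsewhere based on the current state; this just handles transitions.
enum class LocState { FIND_BEACON, ALIGN_BEACON, BACK_TO_WALL,
                      LEFT_TO_CORNER, DONE };

struct Localizer {
    LocState state = LocState::FIND_BEACON;
    float vStrongest = 0.0f;
    const float EPSILON = 0.05f;     // volts, placeholder
    const float BACKUP_TIME = 1.5f;  // seconds (T), placeholder
    float timer = 0.0f;

    // irVolts: beacon sensor reading; rotationComplete: full spin done;
    // jammed: robot pressed into the corner.
    void tick(float irVolts, bool rotationComplete, bool jammed, float dt) {
        switch (state) {
        case LocState::FIND_BEACON:     // spin once, remember the peak
            if (irVolts > vStrongest) vStrongest = irVolts;
            if (rotationComplete) state = LocState::ALIGN_BEACON;
            break;
        case LocState::ALIGN_BEACON:    // spin until near the peak again
            if (vStrongest - irVolts < EPSILON) state = LocState::BACK_TO_WALL;
            break;
        case LocState::BACK_TO_WALL:    // timed reverse to square up
            timer += dt;
            if (timer >= BACKUP_TIME) state = LocState::LEFT_TO_CORNER;
            break;
        case LocState::LEFT_TO_CORNER:  // strafe until jammed in corner
            if (jammed) state = LocState::DONE;  // reset IMU yaw here
            break;
        case LocState::DONE:
            break;
        }
    }
};
```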
To reach the shooting pose, we apply hand-coded body velocity commands for hand-tuned time intervals. To shoot, we turn on the shooter motor and set it to track a hand-coded RPM, then open the intake hatch to let the balls through. To save time, we start the shooter spinning up while we are still driving to the shooting pose.
After shooting, we close the ball hatch. Then, we follow a hand-coded path to return to the corner of the "Studio". After pausing for 3 seconds to get refilled, the robot returns to the shooting pose.
We have an on-off-on switch connected to our Teensy that toggles the robot between off, good press, and bad press. The robot reads the switch's value whenever it returns home to the Studio for reloading, so by flipping the switch we can send the robot to either the good or bad press in the middle of a match.
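Decoding the switch is straightforward if each "on" throw gets its own input pin (common pin to ground, inputs pulled up). The wiring described here is an assumption, not necessarily how our board is laid out:

```cpp
// Three-position mode from an on-off-on switch read on two digital
// pins. In the center (off) position neither pin is pulled low.
enum class PressTarget { OFF, GOOD_PRESS, BAD_PRESS };

// goodPinLow / badPinLow: true when that throw's pin reads LOW,
// i.e. the switch is thrown to that side.
PressTarget readPressSwitch(bool goodPinLow, bool badPinLow) {
    if (goodPinLow) return PressTarget::GOOD_PRESS;
    if (badPinLow)  return PressTarget::BAD_PRESS;
    return PressTarget::OFF;
}
```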
We also have a thumb potentiometer that we can rotate to adjust the RPM of the shooter motor by +/- 100 RPM. This is useful because we can adjust the shooter RPM on the fly during a match, especially as the scale (which we shoot balls into) moves up and down and thus may require a different RPM.
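The trim mapping can be sketched as below, assuming a 10-bit ADC reading; the base RPM and ADC range are placeholders for our actual setup:

```cpp
// Map the thumb pot's ADC reading to a +/- 100 RPM trim on the base
// shooter setpoint. Center of travel means no trim.
const int ADC_MAX = 1023;          // 10-bit ADC assumed
const float TRIM_RANGE = 100.0f;   // +/- RPM

float shooterSetpoint(float baseRpm, int adcReading) {
    float trim = ((float)adcReading / ADC_MAX - 0.5f) * 2.0f * TRIM_RANGE;
    return baseRpm + trim;
}
```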