Pose estimation is the mechanism by which we determine our robot's position and orientation on the field. Knowing the pose gives our robot several advantages:
Enables more complex operations during the Autonomous period
Aids the driver by automating repetitive or complex tasks during teleop, or tasks where visibility is limited
Enables field-oriented drive, although this requires only heading information, not a full pose estimate
Dead-Reckoning
Dead reckoning is an important part of pose estimation. It estimates the current position by applying adjustments to a previously known position. It is easiest to imagine with GPS navigation in a car. With GPS navigation, the driver knows where they are. When they enter a tunnel or another location without good GPS reception, their known position no longer updates.
In the GPS example, dead reckoning would use the accelerometers in the phone or GPS unit to estimate how the vehicle has moved since the last GPS update. This allows it to continue showing movement even though the GPS signal has been lost. This improves the usefulness of the GPS navigation tool, but the estimated position quickly diverges from the actual position as the time without a GPS update increases.
In dead reckoning systems, it is important to have a correction mechanism, which can at least occasionally reset the error to near-zero, reducing the time over which errors accumulate.
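The effect of an occasional correction can be sketched in a few lines of Python. This is an illustration, not robot code; the 1-unit step size and noise level are made-up values:

```python
import random

def dead_reckon(n_steps, correct_every=None, seed=0):
    """Accumulate noisy 1-unit steps; optionally snap back to the true
    position every `correct_every` steps, as a GPS fix would."""
    rng = random.Random(seed)
    truth, estimate = 0.0, 0.0
    for step in range(1, n_steps + 1):
        truth += 1.0
        estimate += 1.0 + rng.gauss(0, 0.05)  # each measurement carries noise
        if correct_every and step % correct_every == 0:
            estimate = truth                  # correction resets error to zero
    return abs(estimate - truth)
```

Without corrections (`dead_reckon(1000)`) the error wanders without bound; with periodic corrections the error can never accumulate for more than `correct_every` steps before being reset.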
Where does dead-reckoning error come from?
Mathematically, it is possible to perfectly compute a velocity or position from acceleration. In calculus, this is performed by taking the single or double integral, respectively.
In practice, this does not work well, because we never have a perfect picture of the robot's acceleration. We have an approximation of it, based on periodic sampling of the sensor. Any noise or loss of data in this acceleration signal is amplified when it is integrated to calculate the robot's velocity or position.
The accelerometer input can still be useful for imprecise event detection, such as detecting a collision with an object, but it isn't adequate for fine control.
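A small Python sketch shows why: integrating sampled data twice amplifies even modest noise. The sample rate and noise level here are made-up values:

```python
import random

def integrate_position(accels, dt):
    """Euler-integrate sampled acceleration twice to recover position."""
    vel, pos = 0.0, 0.0
    for a in accels:
        vel += a * dt    # first integral: velocity
        pos += vel * dt  # second integral: position
    return pos

# A robot sitting perfectly still: the true acceleration is zero,
# but every sample carries a little noise.
rng = random.Random(1)
noise_only = [rng.gauss(0.0, 0.2) for _ in range(500)]  # m/s^2
drift = integrate_position(noise_only, dt=0.02)  # position error after 10 s
```

Even though the robot never moved, `drift` comes out nonzero, and it grows the longer the integration runs.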
Dead reckoning using encoders
It is entirely possible to determine the position of a robot by using encoders in the drivetrain to measure how the wheels have moved against the carpet. This provides a much more accurate estimate of the robot's position, but it will still drift over time.
Some drivetrains are better suited to encoder-based dead reckoning than others:
Differential Drive (commonly 6 wheel)
With no wheel slip, this can be highly accurate. As long as the center wheels are slightly lower than the other wheels, the robot will follow a highly predictable path.
Mecanum
Wheel encoders aren't very useful here. They may work fine when driving straight forward and back, but movement in any other direction likely involves a large amount of wheel slip and unpredictability.
Omni
We've never used this on our team, but this probably involves a large amount of wheel slip in every direction, so wheel encoders would not be very useful.
Swerve
Swerve drive can maintain good traction on the field, but because four modules are driven independently, it is likely not as predictable as differential drive. Any error in the calibration of each swerve module will propagate as inaccuracy in the encoder information.
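For the differential drive case, the standard encoder-odometry update can be written directly from the wheel distances. This is a minimal Python sketch of the kinematics; it assumes no wheel slip:

```python
import math

def diff_drive_update(x, y, heading, d_left, d_right, track_width):
    """Advance a differential-drive pose given the distance each wheel
    travelled since the last update (all lengths in the same unit)."""
    d_center = (d_left + d_right) / 2.0          # distance the chassis moved
    d_theta = (d_right - d_left) / track_width   # change in heading (radians)
    # Midpoint approximation: translate along the average heading of the step.
    x += d_center * math.cos(heading + d_theta / 2.0)
    y += d_center * math.sin(heading + d_theta / 2.0)
    heading += d_theta
    return x, y, heading
```

Driving both wheels forward equally moves the pose straight ahead; driving them in opposite directions spins in place without translating.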
Sources of Error
Accelerometers
Sampling error
The computer sees a smoothed version of what is actually a very noisy accelerometer signal. This results in imperfect calculations of direction and velocity, which compound when used to calculate position. It is also worth pointing out that in a calculus class you tend to work with ideal formulas that perfectly describe a curve, but computers work with point-by-point measurements, not perfect mathematical curves.
Wheel encoders
Wheel slippage
When a wheel slips, the encoder records movement that never happened, causing the actual travelled distance to be less than measured. When driving in anything other than straight lines, this also corrupts the calculated direction, which can lead to wild inaccuracies over time.
Drivetrain calibration errors
On swerve drivetrains, each wheel has to be calibrated so the software knows which encoder position places the wheel in line with the robot chassis. If this is done imprecisely, the robot will behave differently than the mathematical model predicts.
Build issues
In the 2023 season we forgot to glue some of the magnets into the swerve drive steering shafts. This caused our encoder readings to drift over time, or sometimes jump abruptly, when a magnet slipped inside its shaft. This led to a faster-than-normal accumulation of error, hurting the driveability of our robot.
Drivetrain backlash
Each time a motor changes direction, there is slack in the drivetrain which has to be taken up before the wheel actually begins moving in the new direction. For this reason it is always best to place encoders on the mechanism they are measuring, rather than using the built-in encoders in the motor.
On our swerve drivetrains, the turn motor uses an external CANCoder that is tied to the wheel direction directly, so it should be highly accurate. The drive motor, on the other hand, is measured at the motor. This causes it to accumulate backlash from several gearsets in the drivetrain.
Pose Estimation
Pose estimation combines multiple inputs to make a best estimate of the robot's position and orientation on the field. At a minimum, this combines information from the gyro/IMU with encoder readings from the drivetrain.
Additional data sources can be added to provide more inputs to the estimation algorithm.
In an FRC environment, the additional corrections used are most likely going to come from a computer vision system.
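Conceptually, each vision correction nudges the dead-reckoned pose toward the measured one. The sketch below is a crude stand-in for the Kalman-style fusion that real pose estimators perform; the `vision_trust` gain is a made-up parameter, and heading wraparound is ignored for simplicity:

```python
def fuse_pose(odom, vision, vision_trust=0.2):
    """Blend an odometry pose (x, y, heading) toward a vision measurement.
    vision_trust = 0 ignores vision entirely; 1 replaces the pose outright.
    Note: does not handle heading wraparound at +/-180 degrees."""
    return tuple(o + vision_trust * (v - o) for o, v in zip(odom, vision))
```

Full trust snaps the estimate to the vision pose; partial trust moves it only part of the way, which smooths out noisy measurements at the cost of slower convergence.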
Determine robot pose through computer vision
By observing known landmarks on the field, it is possible to triangulate the position of the robot on the field. Two main methods exist for this:
"Traditional" landmark recognition
Historically, retroreflective tape and targeted light sources were used to spot vision markers on the field. These are no longer used as of the 2024 season, but it is still possible to create vision algorithms that spot other landmarks on the field. This could be anything from the driver station status lights to colorful field elements. Care has to be taken with this approach to make sure you actually know which field element you are looking at.
If you can positively identify two elements on the field, it is possible to mathematically calculate your position on the field and supply that correction to the pose estimator.
In practice, this is probably quite difficult without the retroreflective targets that were used in past games.
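If you did manage to measure your distance to two positively identified landmarks, the position calculation is a circle intersection. A Python sketch of that math; the landmark coordinates and ranges are hypothetical inputs:

```python
import math

def trilaterate(p1, r1, p2, r2):
    """Position from measured distances r1, r2 to two known landmarks
    p1, p2. Returns the two candidate points; the field boundary or a
    third landmark disambiguates in practice."""
    (x1, y1), (x2, y2) = p1, p2
    d = math.hypot(x2 - x1, y2 - y1)          # distance between landmarks
    a = (r1**2 - r2**2 + d**2) / (2 * d)      # along-baseline offset from p1
    h = math.sqrt(max(r1**2 - a**2, 0.0))     # perpendicular offset
    mx = x1 + a * (x2 - x1) / d               # foot of the perpendicular
    my = y1 + a * (y2 - y1) / d
    ox = h * (y2 - y1) / d
    oy = h * (x2 - x1) / d
    return (mx + ox, my - oy), (mx - ox, my + oy)
```

Two circles generally intersect in two points, which is why a single pair of range measurements leaves an ambiguity to resolve.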
AprilTags
FRC has now switched to AprilTags as vision markers on the field. The location and orientation of each AprilTag on the field is known ahead of time, allowing the pose estimator to estimate a position and heading from these.
An AprilTag is a two-dimensional pattern, similar to a simplified QR code.
With infinite camera resolution and processing power, it would be possible to calculate the exact distance and angle of an AprilTag relative to the robot. This can then be used to predict the robot's position on the field.
In practice, our cameras do not have infinite resolution, and the accuracy of the position information degrades quickly with distance. The practical distance for identifying an AprilTag seems to be in the 5 to 10 foot range. This inaccuracy can be mitigated by finding ways to see multiple AprilTags at the same time, allowing their predictions to be merged into something hopefully resembling an accurate position.
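One simple way to merge multiple tag sightings is to weight each tag's position estimate by how close that tag is, since accuracy falls off with distance. This inverse-square weighting is an illustration only; real estimators model each measurement's error covariance instead:

```python
def merge_tag_estimates(estimates):
    """Combine per-tag (x, y, tag_distance) position guesses, weighting
    closer tags more heavily. Weight of 1/d^2 is an illustrative choice."""
    weights = [1.0 / (d * d) for _, _, d in estimates]
    total = sum(weights)
    x = sum(w * ex for w, (ex, _, _) in zip(weights, estimates)) / total
    y = sum(w * ey for w, (_, ey, _) in zip(weights, estimates)) / total
    return x, y
```

A tag seen from twice as far away contributes only a quarter as much to the merged position, so one nearby tag dominates several distant, blurry ones.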