AprilTags provide an important navigation aid: they give us a way to correct and reset the error that accumulates in our Pose Estimator. Unfortunately, they are not a magic bullet; they come with some challenges related to identifying them effectively on the field.
Processing Power
Detecting AprilTags in captured images is a processor-intensive operation. While it can be done on the roboRIO, doing so takes away from the roboRIO's ability to perform its other tasks.
This is resolved through the use of co-processors: in essence, we add additional computers to the robot to off-load vision processing from the roboRIO.
The following PhotonVision documentation lists many different co-processors and benchmarks their performance across different types of vision applications. While many teams focus on having the fastest co-processor possible, the most important step is adding a co-processor at all.
Photon Vision Performance Matrix
Image Quality
This is influenced by a few factors:
Camera Resolution
Higher resolution images make it possible to detect AprilTags from further away, but they also take more time to process, increasing the latency of the operation.
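The resolution/latency tradeoff can be sketched with back-of-envelope arithmetic. This is a rough model, not a benchmark: it simply assumes detection cost is roughly proportional to the number of pixels in the frame.

```python
# Rough sketch: AprilTag detection work scales with pixel count, so
# resolution trades detection range against latency.
# These figures are illustrative assumptions, not measured benchmarks.

def pixel_count(width: int, height: int) -> int:
    """Total pixels in a frame."""
    return width * height

def relative_cost(width, height, base=(640, 480)):
    """Processing cost relative to a baseline resolution, assuming
    cost is roughly proportional to pixel count."""
    return pixel_count(width, height) / pixel_count(*base)

# Doubling both dimensions quadruples the pixels to process,
# so expect roughly 4x the per-frame latency:
cost = relative_cost(1280, 960)
```

Under this model, a camera that detects tags twice as far away (by doubling resolution in each dimension) pays for it with roughly four times the processing work per frame.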
Motion Blur
A blurry image is almost impossible to process accurately. With a camera that produces blurry images, it might only be possible to identify AprilTags while the robot is stopped or moving very slowly. In a fast-paced game this is a serious disadvantage, making it harder to "incidentally" pick up AprilTags while moving around the field.
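To see why motion matters, we can estimate how far a tag smears across the image during one exposure. All the parameter values below (speed, exposure time, field of view, distance) are assumed example figures, not measurements of any particular camera.

```python
import math

# Back-of-envelope motion blur estimate: how many pixels does a tag
# smear across during a single exposure? All parameters are assumed
# example values, not measurements of any particular camera.

def pixels_per_meter(h_res_px, fov_deg, distance_m):
    """Pixels spanned by one meter of scene at the given distance."""
    scene_width_m = 2 * distance_m * math.tan(math.radians(fov_deg) / 2)
    return h_res_px / scene_width_m

def motion_blur_px(speed_mps, exposure_s, h_res_px, fov_deg, distance_m):
    """Apparent blur, in pixels, for sideways motion relative to the tag."""
    return speed_mps * exposure_s * pixels_per_meter(h_res_px, fov_deg, distance_m)

# A robot moving 3 m/s with a 10 ms exposure, a 640 px wide image,
# a 70 degree FOV, and a tag 2 m away smears the tag by several
# pixels -- enough to round off the corners the detector relies on.
blur = motion_blur_px(3.0, 0.010, 640, 70.0, 2.0)
```

Shortening the exposure reduces the blur directly, which is why brightly lit, short-exposure setups tolerate motion better.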
Rolling Shutter
Rolling shutter is the mechanism by which most digital cameras capture an image one line at a time. When observing fast-moving objects, this distorts the image, making detection difficult or impossible. It can be mitigated by using Global Shutter cameras, which capture all pixels in the image simultaneously.
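The distortion is easy to quantify: each row is captured slightly later than the one above it, so a horizontally moving tag is shifted a little more on every row, skewing a square into a parallelogram. The numbers below are illustrative assumptions.

```python
# Rolling shutter skew sketch: rows are read out one at a time, so a
# horizontally moving tag shifts a bit more on each successive row.
# Example numbers are assumptions, not real camera specifications.

def rolling_shutter_skew_px(speed_px_per_s, readout_time_s):
    """Horizontal shift, in pixels, between the first and last row
    of the frame for an object moving sideways at constant speed."""
    return speed_px_per_s * readout_time_s

# A tag sweeping across the image at 500 px/s with a 20 ms frame
# readout is sheared by 10 px top-to-bottom. A global shutter camera
# exposes every row at the same instant, so the shear is zero.
skew = rolling_shutter_skew_px(500.0, 0.020)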
Camera field of view
The more of the field a camera can see, the fewer pixels it uses to capture a single AprilTag. A narrower camera lens allows tag identification at a longer distance, although it also decreases the likelihood of one or more tags falling within the camera's view. This can be mitigated by using multiple cameras on the robot, with the accompanying complexity that entails.
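The geometry behind this tradeoff can be sketched directly: at a fixed resolution and distance, the tag's width in pixels depends on the lens's field of view. The 0.1651 m (6.5 in) tag size matches the AprilTags used in recent FRC games; the other numbers are illustrative assumptions.

```python
import math

# Field-of-view tradeoff sketch: at fixed resolution and distance, a
# narrower lens devotes more pixels to each tag. The 0.1651 m tag size
# matches recent FRC AprilTags; other figures are assumed examples.

def tag_width_px(tag_m, distance_m, h_res_px, fov_deg):
    """Approximate width of the tag in the image, in pixels."""
    scene_width_m = 2 * distance_m * math.tan(math.radians(fov_deg) / 2)
    return tag_m * h_res_px / scene_width_m

# Same 800 px wide camera, same 4 m distance, different lenses:
wide   = tag_width_px(0.1651, 4.0, 800, 90.0)  # wide lens: fewer px per tag
narrow = tag_width_px(0.1651, 4.0, 800, 45.0)  # narrow lens: more px per tag
```

Halving the field of view roughly doubles the pixels across the tag, extending usable detection range at the cost of a smaller slice of the field being visible at once.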
Processing Speed
Speed is influenced in several ways as well:
Shrink the data
This can be done by using a lower resolution, but that comes at the cost of reduced identification distance. The image can also be shrunk by using grayscale instead of color: a grayscale image contains far less data than a color image, simplifying the computations needed.
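The grayscale savings are easy to quantify. Assuming 8 bits per channel, an RGB frame carries three bytes per pixel versus one for grayscale, so the conversion cuts the data to a third before resolution is touched at all:

```python
# Data-size sketch: a grayscale frame carries one byte per pixel
# versus three for RGB, assuming 8 bits per channel.

def frame_bytes(width: int, height: int, channels: int) -> int:
    """Raw size of an uncompressed frame in bytes."""
    return width * height * channels

color = frame_bytes(1280, 720, 3)   # 24-bit RGB frame
gray  = frame_bytes(1280, 720, 1)   # 8-bit grayscale frame
# gray is exactly one third the size of color
```

AprilTag detection only needs intensity edges, not color, which is why this reduction costs nothing in detection quality.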
Use faster co-processors
This is a strong argument for using PhotonVision instead of the native firmware for the Limelight camera. PhotonVision allows our vision pipeline to be moved to faster co-processors and better cameras as they become available.
Use more co-processors
While it is possible to attach more than one camera to a single PhotonVision co-processor, doing so may limit the frame rate that can be processed from each camera. Adding additional co-processors spreads this load across many devices, producing more information with less latency.
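The effect of sharing a co-processor can be sketched with a naive even-split model. The frames-per-second budget below is an assumed example figure, and real pipelines rarely divide perfectly evenly, but the shape of the tradeoff holds:

```python
# Throughput sketch: cameras sharing one co-processor split its frame
# budget; one co-processor per camera keeps the full rate. The 60 fps
# budget is an assumed example, and the even split is a simplification.

def fps_per_camera(processor_fps_budget: float, cameras_on_processor: int) -> float:
    """Naive even split of one co-processor's processing budget."""
    return processor_fps_budget / cameras_on_processor

shared    = fps_per_camera(60.0, 3)   # three cameras on one device
dedicated = fps_per_camera(60.0, 1)   # one camera per device
```

Under this model, three cameras on one device each get a third of the frame rate that three dedicated devices would provide, which translates directly into staler pose corrections.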