This guide covers how to use a Google Coral and a Limelight to do AI game piece detection automatically. The examples provided are from team 9930's offseason bot for the 2023 game (Charged Up).
Power the Limelight with a 12V cable.
Plug the Google Coral into the USB port on the Limelight.
Turn off your Wi-Fi.
Connect the Limelight and your PC with an Ethernet cable.
Adjust your Ethernet Settings (Remember to change them back to default when you are done)
Open Ethernet Settings
Click on "Change adapter options"
Right click on "Ethernet" and click "Properties"
Click on "Internet Protocol Version 4 (TCP/IPv4)" and then click on "Properties"
Select "Use the following IP address:"
"IP address:" should be "10.9.30.2" and "Subnet mask:" should be "255.255.255.0"
Click on "OK" to finish
Hold down the white button until the green lights flash.
Connect to the Limelight through your browser using the URL "http://limelight.local:5801/" or if it has a name, use "http://limelight-CameraName.local:5801/"
The Limelight may take a short amount of time to boot back up.
Plug in the Google Coral through the USB port on the Limelight.
Insert the PoE adapter into the Ethernet port on the Limelight.
Power the PoE adapter through the Radio/PDH.
Plug the Ethernet cable into the Radio/Switch.
Do this after robot setup. You only need to complete this set of steps once.
Reconfigure the camera's height above the floor and its x and y offsets from the center of the robot.
Turn on "Send through JSON to NT". This allows the limelight to send files to the robot.
"Turn on snap to floor".
If you can, turn on auxiliary power on the radio. You can find the DIP switches under a sticker on the back; turn on the switch corresponding to the port your Limelight is plugged into.
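With "Send through JSON to NT" enabled, robot code can parse the full results dump. Here is a minimal sketch using the LimelightHelpers library (covered later in this guide); the field names follow the 2023 version of the library, and "limelight-front" is our camera's name:

```java
import frc.robot.LimelightHelpers; // assumes LimelightHelpers.java was copied into the robot folder

// Inside a subsystem or periodic method: grab the latest JSON results
// dump and print each neural detector target.
LimelightHelpers.LimelightResults results = LimelightHelpers.getLatestResults("limelight-front");
for (LimelightHelpers.LimelightTarget_Detector target : results.targetingResults.targets_Detector) {
    System.out.println(target.className + ": tx=" + target.tx + ", confidence=" + target.confidence);
}
```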
Pipeline Type (Detector)
This is the type of detection the camera will perform. Examples are Retroreflective (reflective tape), AprilTags, and the one we are using, Detector (object detection).
Source Image (Camera)
This tells the camera whether to analyze the video feed or a given image.
Resolution (1280x960 40fps)
This sets the camera's resolution and framerate. There is a trade-off between the two: "1280x960 40fps" and "640x480 90fps" are two ends of it. Higher resolution results in more accurate detection, while higher fps makes detection more responsive. Since robot code runs 50 times a second (every 20ms), "1280x960 40fps" is the best option.
LEDs & LED Power (Off & 100)
For Game Piece Detection, you don't need LEDs. It doesn't matter what you set LED Power to because the LEDs aren't on.
Orientation (Upside-Down)
The Limelight can be mounted right side up or upside down, and it lets you invert the feed so you don't have to change the code.
Exposure (3300)
This adjusts how bright or dark the camera feed is before detection is applied. It always needs to be as high as possible to get the best lighting, since that is what the AI was trained on.
These settings are meant for specific pipelines; not all of them apply to Game Piece Detection. The values in parentheses below are what we tested with.
Confidence Threshold (0.8/80%)
Confidence is the measure the Google Coral uses to express how accurate it thinks a detection is. Setting the threshold too high will lead to stuttering (detections flickering in and out), while setting it too low will detect things that aren't what we want.
X-Crop & Y-Crop (-1, 1) (-1, 1)
This allows you to crop out part of the camera feed, which can help with performance and reduce random detections.
Class ID Filters
This filter is just a number that allows you to detect only specific objects, e.g., 0 will detect only cones and 1 will detect only cubes.
TFLite Model File
TFLite model files are the trained models the Google Coral runs. You can get pretrained models from Limelight under the Neural Network section at "https://limelightvision.io/pages/downloads".
Labels File
This is just a ".txt" file that the Google Coral will use to name the things it is detecting, one class name per line (see the example below).
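For example, a labels file matching the class IDs above (0 for cones, 1 for cubes) would contain just:

```
cone
cube
```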
Both Contour Filtering and Output handle how the camera deals with detections. They don't need to be changed unless we are facing issues or want to do something in particular.
There are four main variables that Limelight provides when it detects an object (a snippet showing how to read them follows this list):
tx: How far left or right the object is from the center of the screen, in degrees.
ty: How far up or down the object is from the center of the screen, in degrees.
ta: The percentage of the screen that the detected object takes up.
tl: How long the detection took, in milliseconds.
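These values live on NetworkTables under the camera's name, so they can also be read directly (this is what LimelightHelpers wraps for you). A minimal sketch, assuming our camera name "limelight-front":

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

// Inside a subsystem or periodic method:
NetworkTable table = NetworkTableInstance.getDefault().getTable("limelight-front");
double tx = table.getEntry("tx").getDouble(0.0); // degrees right (+) or left (-) of center
double ty = table.getEntry("ty").getDouble(0.0); // degrees above (+) or below (-) center
double ta = table.getEntry("ta").getDouble(0.0); // percent of the image the target covers
double tl = table.getEntry("tl").getDouble(0.0); // detection latency in milliseconds
```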
LimelightHelpers is a library that Limelight provides that allows us to use the camera in our code. You can find it at "https://github.com/LimelightVision/limelightlib-wpijava", and you have to put it into the robot folder of our code.
Note: Whenever you input the name of the limelight, use "limelight-<LimeLightName>". For example, we used "limelight-front".
This is the Utility Class that we use to receive and log the information from the camera.
GamePieceDetectionUtility()
Requires the name of the camera for the code to watch.
get_tx()
Uses LimelightHelpers to find "tx", then logs and returns the value.
get_ty()
Uses LimelightHelpers to find "ty", then logs and returns the value.
get_ta()
Uses LimelightHelpers to find "ta", then logs and returns the value.
This Command uses the Limelight, Trapezoidal Profile, and Odometry to move to the game piece automatically.
LimeLightIntakeCommand()
The command needs the Swerve Drive, GamePieceDetectionUtility, and a desired position in the form of a Pose2d.
initialize()
The robot finds the distance between itself and the game piece using odometry, then uses that distance to calculate the Trapezoidal Profile.
execute()
This uses the camera to calculate the strafe value by applying a PID controller to "tx". It then uses a timer to step through the Trapezoidal Profile to find what its speed should be. Finally, it logs the information and makes the robot move.
isFinished()
This will periodically check to see if the robot has reached the end of the Trapezoidal Profile.
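A hedged sketch of how this command fits together, using the WPILib 2023 command-based API. The SwerveDrive type, its drive() signature, and the PID and profile constants are assumptions, not our exact implementation:

```java
import edu.wpi.first.math.controller.PIDController;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.trajectory.TrapezoidProfile;
import edu.wpi.first.wpilibj.Timer;
import edu.wpi.first.wpilibj2.command.CommandBase;

public class LimeLightIntakeCommand extends CommandBase {
    private final SwerveDrive m_swerve;                 // assumed subsystem type
    private final GamePieceDetectionUtility m_camera;
    private final Pose2d m_target;                      // desired game piece position
    private final PIDController m_strafePid = new PIDController(0.05, 0.0, 0.0); // gains assumed
    private final Timer m_timer = new Timer();
    private TrapezoidProfile m_profile;

    public LimeLightIntakeCommand(SwerveDrive swerve, GamePieceDetectionUtility camera, Pose2d target) {
        m_swerve = swerve;
        m_camera = camera;
        m_target = target;
        addRequirements(swerve);
    }

    @Override
    public void initialize() {
        // Distance from the robot's odometry pose to the game piece.
        double distance = m_swerve.getPose().getTranslation()
                .getDistance(m_target.getTranslation());
        // Profile from zero out to that distance, ending at rest; limits assumed.
        m_profile = new TrapezoidProfile(
                new TrapezoidProfile.Constraints(3.0, 2.0),
                new TrapezoidProfile.State(distance, 0.0),
                new TrapezoidProfile.State(0.0, 0.0));
        m_timer.restart();
    }

    @Override
    public void execute() {
        // Strafe toward the piece by driving tx to zero with PID.
        double strafe = m_strafePid.calculate(m_camera.get_tx(), 0.0);
        // Step through the profile on a timer to get the forward speed.
        double speed = m_profile.calculate(m_timer.get()).velocity;
        m_swerve.drive(speed, strafe, 0.0, false); // robot-relative; signature assumed
    }

    @Override
    public boolean isFinished() {
        return m_profile.isFinished(m_timer.get());
    }
}
```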
This Command in CommandFactoryUtility allows us to easily use the camera in autonomous and bind it to a button for testing.
createAutoIntakeCommand()
This command needs the Swerve Drive, Arm Subsystem, Manipulator Subsystem, Toproller Subsystem, GamePieceDetectionUtility, and the Position of the game piece as a Pose2d.
createIntakeCommand()
This will lower the arm and start the manipulator and top roller motors.
.andThen(createWaitUntilAtAngleCommand())
This will wait for the arm to reach the intake angle before moving on.
.andThen(LimeLightIntakeCommand())
After the arm is in place, the robot runs the LimeLightIntakeCommand, which finishes once it has reached the end of the Trapezoidal Profile.
.andThen(createStowArmCommand())
After the intake and movement are done, this brings the arm back to the stow position and stops the rollers from intaking.
return command
This returns the composed command so it can be used for a controller button or in autonomous.
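Put together, the factory method looks roughly like this; the helper signatures here are assumptions based on the steps above:

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.wpilibj2.command.Command;

public static Command createAutoIntakeCommand(
        SwerveDrive swerve, ArmSubsystem arm, ManipulatorSubsystem manipulator,
        TopRollerSubsystem topRoller, GamePieceDetectionUtility camera, Pose2d piecePose) {
    Command command = createIntakeCommand(arm, manipulator, topRoller)      // lower arm, start rollers
            .andThen(createWaitUntilAtAngleCommand(arm))                    // wait for the intake angle
            .andThen(new LimeLightIntakeCommand(swerve, camera, piecePose)) // drive to the piece
            .andThen(createStowArmCommand(arm, manipulator, topRoller));    // stow and stop rollers
    return command;
}
```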
We can use a controller to test all of the code manually. This can only be used once before you have to power cycle or redeploy code, because the command drives to an absolute field position.
m_driverController.leftBumper()
This will watch the left bumper button on the driver controller.
.onTrue(createAutoIntakeCommand())
This waits until the desired button is pressed, then runs the intake command once; to run it again you have to release and re-press the button.
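The binding itself is short; the controller port and the example game piece pose are made up for illustration:

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.wpilibj2.command.button.CommandXboxController;

CommandXboxController m_driverController = new CommandXboxController(0); // port assumed

// Runs the auto intake once per press; release and re-press to run it again.
m_driverController.leftBumper()
        .onTrue(CommandFactoryUtility.createAutoIntakeCommand(
                m_swerve, m_arm, m_manipulator, m_topRoller, m_camera,
                new Pose2d(6.0, 4.6, new Rotation2d()))); // example absolute field position
```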
The big thing with autonomous is that PathPlanner (our autonomous software) uses the Swerve Drive Subsystem. Subsystems can only be used by one thing at a time, so we can't just add the camera intake command to a path. What we did was create multiple paths that act like one continuous path, separated by the camera intake command.
Our goal for the camera intake command is to make it as fluid as possible: the path ends at a high speed and the command starts at that same high speed. Once the command ends (once it intakes the game piece), the next path takes over. We tell the path to lower the intake before the command starts and to stow after it runs as a failsafe, to ensure the robot doesn't have any potential issues.
PathPlannerCommand()
The robot scores as a pre-command and then runs the first path. As a post-command, it runs the camera intake command. Post-commands are the only time a command can use the Swerve Drive.
.andThen(PathPlannerCommand())
After our camera intake command runs, we go back to running the second path. Once that is done, we run our second camera intake command as a post-command again.
.andThen(PathPlannerCommand())
Finally, we run our third and last path with no post-command.
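As a sketch, the whole routine chains together like this. PathPlannerCommand is our own wrapper, so the constructor arguments shown here (path name, pre-command, post-command) are illustrative assumptions:

```java
import edu.wpi.first.wpilibj2.command.Command;

// Path 1: score (pre-command), drive the path, then camera intake (post-command).
Command auto = new PathPlannerCommand("Path1", m_swerve, scoreCommand, autoIntakeCommand1)
        // Path 2: drive, then the second camera intake as its post-command.
        .andThen(new PathPlannerCommand("Path2", m_swerve, null, autoIntakeCommand2))
        // Path 3: the last path, with no post-command.
        .andThen(new PathPlannerCommand("Path3", m_swerve, null, null));
```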