For coding, we use Android Studio and Java. We are happy to train new members of our team on how to use GitHub so that we can share and collaborate on code. Some significant things we did in our coding include:
Training an OpenCV model to detect our team prop; this model recognizes colors to help direct our robot.
Initially, we used a TensorFlow model, but after running into problems with detecting the team prop consistently, we switched to OpenCV.
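The core idea behind color-based prop detection can be sketched in plain Java. This is a simplified illustration, not our actual pipeline: the class, zone names, and threshold are made up for the example, and a real OpenCV pipeline would work on camera frames instead of a raw array.

```java
// Simplified sketch of color-based prop detection: split the frame into
// three zones, count the pixels whose red channel exceeds a threshold in
// each zone, and report the zone with the most matches. All names and
// values here are illustrative, not our competition code.
public class PropDetectorSketch {
    public enum Zone { LEFT, CENTER, RIGHT }

    // redChannel: one intensity value (0-255) per pixel, rows by columns
    public static Zone detect(int[][] redChannel, int threshold) {
        int width = redChannel[0].length;
        int[] counts = new int[3];
        for (int y = 0; y < redChannel.length; y++) {
            for (int x = 0; x < width; x++) {
                if (redChannel[y][x] > threshold) {
                    counts[3 * x / width]++; // map column to LEFT/CENTER/RIGHT
                }
            }
        }
        int best = 0;
        for (int i = 1; i < 3; i++) {
            if (counts[i] > counts[best]) best = i;
        }
        return Zone.values()[best];
    }
}
```

In the real pipeline, the same "threshold a color channel, then compare regions" step is what lets the camera tell which spike mark the prop is sitting on.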
We have both Autonomous and TeleOp programs.
We have Autonomous programs for all four corners of the field, each of which can score 45 points per run.
Additionally, we use sensors and presets in our programs that allow us to work more efficiently.
Sensors: we have a camera and a magnetic sensor. Our camera utilizes our OpenCV model in auto. Our magnetic sensor senses the rotation of the linear slide; every time it senses the linear slide, it resets that value to 0, which keeps our presets accurate and consistent.
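The reset idea above can be sketched as a small helper: whenever the magnetic sensor sees the slide, the current encoder reading becomes the new zero point, so any drift is wiped out. The class and method names here are illustrative, not our actual code.

```java
// Sketch of re-zeroing a linear slide's position from a magnetic sensor.
// Names and values are illustrative only.
public class SlideTracker {
    private int encoderOffset = 0;

    // rawTicks: cumulative encoder reading from the slide motor
    // sensorTriggered: true when the magnetic sensor senses the slide
    public int update(int rawTicks, boolean sensorTriggered) {
        if (sensorTriggered) {
            encoderOffset = rawTicks;    // reset: this reading now means 0
        }
        return rawTicks - encoderOffset; // drift-corrected slide position
    }
}
```

Because the position is recomputed from a known physical reference each pass, preset targets stay consistent across an entire match.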
Presets: We have 3 main presets that we use, mapped to three buttons on our 2nd gamepad: an intake preset, a movement preset, and a scoring preset. Each has specific values that are integral to our gameplay.
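A minimal sketch of how button-mapped presets can work: each button selects a fixed slide target, and nothing pressed keeps the current target. The button choices and tick values below are placeholders, not our tuned numbers.

```java
// Sketch of three slide presets selected by gamepad buttons.
// Tick targets and button assignments are illustrative placeholders.
public class SlidePresets {
    static final int INTAKE = 0;     // slide down for intaking
    static final int MOVE   = 400;   // safe height for driving
    static final int SCORE  = 1100;  // raised for scoring

    // Returns the target chosen by the 2nd gamepad's buttons,
    // or the unchanged current target if no button is pressed.
    public static int select(boolean intakeBtn, boolean moveBtn,
                             boolean scoreBtn, int current) {
        if (intakeBtn) return INTAKE;
        if (moveBtn)   return MOVE;
        if (scoreBtn)  return SCORE;
        return current;
    }
}
```

In TeleOp, the selected target would then be fed to the slide motor's run-to-position control each loop.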
If you would like to view our Autonomous code in action, watch the first video below! ↓
If you would like to view our anti-tipping code (made by Kabir for the Control Award!) in action, watch the second video below! ↓
Autonomous coding!
Anti-tipping code!