Complete Code: https://github.com/sidsoc12/fryborg_code
For our navigation strategy, we opted for a fully autonomous state machine approach, meticulously coded to control a four-motor omni-wheel drive system. Our goal was to maximize accuracy and consistency without relying on predefined time delays, so we implemented closed-loop control using a quadrature encoder co-processor and derivative-limited position control through the Derivs_Limiter library. This allowed us to smoothly and precisely control the robot’s movement across the field in both the X and Y directions while keeping track of acceleration and velocity limits. We began by orienting the robot using three VL53L0X ToF sensors connected via an I2C multiplexer, aligning it to a known reference direction. Once the robot was facing “true north”, we reset the encoder values and began navigating the map using position targets and wall collisions to re-zero our robot's location in space.
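As a rough illustration of that orientation step, the sketch below shows how a TCA9548A-style I2C multiplexer can gate reads from individual VL53L0X sensors and how two of them can be used to square the chassis to a wall. The mux address and channel assignments, the alignment tolerance, and the rotateInPlace()/stopDrive() helpers are assumptions for illustration, not our exact implementation.

```cpp
#include <Wire.h>
#include <Adafruit_VL53L0X.h>

const uint8_t MUX_ADDR = 0x70;    // assumed TCA9548A address
const uint8_t CH_LEFT  = 0;       // assumed mux channel for the left sensor
const uint8_t CH_RIGHT = 1;       // assumed mux channel for the right sensor

Adafruit_VL53L0X tofLeft, tofRight;

void rotateInPlace(int dir);      // hypothetical omni-drive helpers,
void stopDrive();                 // defined elsewhere in the drive code

// Enable exactly one multiplexer channel so only that sensor answers on I2C.
void selectChannel(uint8_t ch) {
  Wire.beginTransmission(MUX_ADDR);
  Wire.write(1 << ch);
  Wire.endTransmission();
}

void setupSensors() {
  Wire.begin();
  selectChannel(CH_LEFT);
  tofLeft.begin();
  selectChannel(CH_RIGHT);
  tofRight.begin();
}

// Take one blocking measurement from the sensor behind the given channel.
uint16_t readRangeMM(Adafruit_VL53L0X &tof, uint8_t ch) {
  selectChannel(ch);
  VL53L0X_RangingMeasurementData_t m;
  tof.rangingTest(&m, false);
  return (m.RangeStatus != 4) ? m.RangeMilliMeter : 8190;  // 4 = out of range
}

// Rotate until two sensors on the same face read nearly equal distances,
// meaning the chassis is square to the wall ("true north"); the encoders
// are zeroed immediately afterwards.
void orientToNorth() {
  const int TOL_MM = 5;           // assumed alignment tolerance
  while (true) {
    int diff = (int)readRangeMM(tofLeft, CH_LEFT) -
               (int)readRangeMM(tofRight, CH_RIGHT);
    if (abs(diff) <= TOL_MM) break;
    rotateInPlace(diff > 0 ? 1 : -1);
  }
  stopDrive();
}
```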
In our code, we created specific state transitions for each part of the competition. We handled wall collisions by checking both position limiters and encoder velocities, allowing us to stop when the robot made physical contact and the encoders registered zero motion. For each navigation step, we used a combination of AccelPosition for smooth deceleration before a stop and WallAccelPosition for driving into known walls. For example, after orienting, we drove down and then left into the walls, letting the robot recalibrate its internal position based on the fact that it had hit a known wall. We navigated to the pot, pushed it into the burner, backed off, ignited it with a servo-controlled “igniter”, and then returned to drop the ball into the pot.
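In pared-down form, the wall-contact check amounts to commanding motion toward a target at or beyond the known wall and treating a sustained zero encoder velocity as physical contact. The snippet below is a simplified sketch of that idea with hypothetical helper names (commandDrive, readEncoderVelocity, zeroAxis) and thresholds, not the actual WallAccelPosition code.

```cpp
#include <Arduino.h>

void commandDrive(int axis, long targetTicks);  // hypothetical: push toward the target
long readEncoderVelocity(int axis);             // hypothetical: velocity from the encoder co-processor
void stopDrive();                               // hypothetical: halt all four motors
void zeroAxis(int axis);                        // hypothetical: reset position on this axis

// Drive into a known wall: keep commanding motion until the encoders report
// (nearly) zero velocity for a short settle window, then re-zero that axis.
bool driveIntoWall(int axis, long targetTicks) {
  const long VEL_EPSILON = 2;              // assumed "not moving" threshold (ticks/sample)
  const unsigned long SETTLE_MS = 150;     // assumed stall-confirmation window
  unsigned long stalledSince = 0;

  while (true) {
    commandDrive(axis, targetTicks);
    long vel = readEncoderVelocity(axis);

    if (abs(vel) < VEL_EPSILON) {
      if (stalledSince == 0) stalledSince = millis();
      if (millis() - stalledSince > SETTLE_MS) {
        stopDrive();
        zeroAxis(axis);                    // wall contact = known position
        return true;
      }
    } else {
      stalledSince = 0;                    // still moving; reset the stall timer
    }
  }
}
```

Our actual implementation also consults the position limiters, as described above, but the zero-velocity condition is what distinguishes a deliberate wall hit from normal motion.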
Our robot’s behavior was orchestrated using a structured state machine implemented in the runStateMachine() function. Each state represented a discrete step in the robot’s journey—from orientation to ignition to launching and finally delivery and celebration. We transitioned between states based on sensor feedback (such as encoder positions or ToF sensor alignment) or timers, ensuring that actions only occurred once the prior movement or action had completed. The robot would only proceed if conditions like reaching a wall or dropping a ball were satisfied. For example, the transition from PRESS_IGNITER to MOVE_RIGHT_BURNER occurred only after the igniter servo had fully pressed and released, ensuring precise timing and mechanical interaction. The state machine also included logic to stop the robot and trigger celebration if the global timer exceeded the match duration, giving us a clean shutdown behavior.
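A skeleton of that pattern is shown below. Only PRESS_IGNITER and MOVE_RIGHT_BURNER are state names taken from our code; the other states, the condition helpers, and the match-duration constant are placeholders for illustration.

```cpp
#include <Arduino.h>

enum RobotState {
  ORIENT,             // placeholder: ToF alignment to "true north"
  // ...navigation and cooking states omitted...
  PRESS_IGNITER,      // state from our code: servo presses the igniter
  MOVE_RIGHT_BURNER,  // state from our code: next move after ignition
  CELEBRATE,
  STOPPED
};

RobotState state = ORIENT;
unsigned long matchStartMs = 0;                    // set when the match begins
const unsigned long MATCH_DURATION_MS = 120000UL;  // assumed match length

bool orientationComplete();   // hypothetical condition helpers
bool igniterCycleComplete();
bool reachedTarget();
void stopDrive();
void blinkEyesAndLights();

void runStateMachine() {
  // Global timer: force a clean shutdown and celebration when time expires.
  if (state != CELEBRATE && millis() - matchStartMs > MATCH_DURATION_MS) {
    stopDrive();
    state = CELEBRATE;
  }

  switch (state) {
    case ORIENT:
      if (orientationComplete()) state = PRESS_IGNITER;
      break;
    case PRESS_IGNITER:
      // Advance only once the igniter servo has fully pressed and released.
      if (igniterCycleComplete()) state = MOVE_RIGHT_BURNER;
      break;
    case MOVE_RIGHT_BURNER:
      if (reachedTarget()) state = CELEBRATE;   // encoder target reached
      break;
    case CELEBRATE:
      blinkEyesAndLights();
      break;
    case STOPPED:
      break;
  }
}
```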
For encoder-based navigation, we fed coordinate targets into our AccelPosition() and WallAccelPosition() functions. We scaled our destination positions using a 31 ticks/inch conversion factor, allowing for precise unit consistency across all directions. For instance, to drive down to the wall at the beginning, we used the coordinates (0, -10 * 31), while pushing the pot into the burner involved a leftward move to (-77 * 31, 0). After depositing the ball, we backed up to (0, -13 * 31) and moved right again to (82 * 31, -13 * 31) to align with the pantry. Similarly, to move under the burner at the end, we fed coordinates like (5 * 31, 13 * 31) and finalized the push into the customer zone with (83 * 31, 0). These values were chosen to correspond to physical field layout measurements, and we tuned them carefully in hardware testing to account for drift and contact tolerances. Overall, our state machine and coordinate-based motion planning allowed us to build a robust, fully autonomous system that responded to both digital state transitions and real-world physical signals.
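Concretely, the coordinate handling is just field measurements in inches multiplied by the encoder scale. The snippet below illustrates that conversion and reproduces a few of the moves listed above; the AccelPosition/WallAccelPosition declarations are shown only as hypothetical signatures.

```cpp
const long TICKS_PER_INCH = 31;     // measured encoder scale used throughout

long inchesToTicks(float inches) {
  return (long)(inches * TICKS_PER_INCH);
}

void AccelPosition(long xTicks, long yTicks);      // hypothetical signature: smooth decel to a point
void WallAccelPosition(long xTicks, long yTicks);  // hypothetical signature: drive into a known wall

void exampleMoves() {
  WallAccelPosition(0, inchesToTicks(-10));               // drive down into the wall at the start
  WallAccelPosition(inchesToTicks(-77), 0);               // push the pot left into the burner
  AccelPosition(0, inchesToTicks(-13));                   // back up after depositing the ball
  AccelPosition(inchesToTicks(82), inchesToTicks(-13));   // move right to align with the pantry
}
```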
We also integrated the launcher motor using a feedforward + feedback control strategy with a Hall effect sensor, calculating the rotational period and adjusting PWM values to maintain a target RPM. We used interrupts for accurate timing and designed the launcher activation to align with a global match timer, ensuring that launching occurred only after the cooking sequence was completed. LEDs were used to indicate state changes (e.g., orange before the match, yellow while launching, and purple for the celebration), and we included servo-driven mechanisms for the ball drop, the igniter, and eye blinking to celebrate at the end of the match. By combining real-time sensor feedback, timed servo actions, and a reliable state machine, we ensured that our robot could not only navigate the field autonomously but also perform each step with mechanical precision and visual feedback.
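To make the launcher loop concrete, here is a minimal sketch of that feedforward-plus-feedback pattern, assuming one Hall pulse per revolution; the pin numbers, target RPM, feedforward value, and proportional gain are illustrative rather than our tuned values.

```cpp
#include <Arduino.h>

const int HALL_PIN      = 2;        // assumed interrupt-capable input pin
const int LAUNCHER_PWM  = 9;        // assumed PWM output pin
const float TARGET_RPM  = 3000.0f;  // assumed target flywheel speed
const float FEEDFORWARD = 150.0f;   // assumed baseline PWM near the target RPM
const float KP          = 0.02f;    // assumed proportional gain

volatile unsigned long lastPulseUs = 0;
volatile unsigned long periodUs    = 0;

// Interrupt: timestamp each Hall pulse to measure the rotational period
// (one pulse per revolution assumed).
void onHallPulse() {
  unsigned long now = micros();
  periodUs = now - lastPulseUs;
  lastPulseUs = now;
}

void setup() {
  pinMode(HALL_PIN, INPUT_PULLUP);
  pinMode(LAUNCHER_PWM, OUTPUT);
  attachInterrupt(digitalPinToInterrupt(HALL_PIN), onHallPulse, FALLING);
}

void loop() {
  noInterrupts();
  unsigned long period = periodUs;  // copy the volatile reading atomically
  interrupts();

  float measuredRpm = (period > 0) ? 60000000.0f / period : 0.0f;
  float pwm = FEEDFORWARD + KP * (TARGET_RPM - measuredRpm);  // feedforward + P feedback
  analogWrite(LAUNCHER_PWM, constrain((int)pwm, 0, 255));
  delay(20);                        // update at roughly 50 Hz
}
```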