Controls were split between two Arduinos (top and bottom), which communicated via three shared pins. The bottom Arduino handled the low-level driving controls, while the top Arduino read sensor input to execute closed-loop control.
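With only three shared pins, the commands from the top Arduino to the bottom one can be represented as a small fixed vocabulary, one bit per pin. The sketch below shows that idea in plain C++ (the command names and the exact encoding are hypothetical; the source does not specify the wire protocol):

```cpp
#include <cassert>

// Hypothetical 3-bit command vocabulary for the three shared pins.
enum Command { CMD_STOP = 0, CMD_FORWARD = 1, CMD_BACKWARD = 2,
               CMD_LEFT = 3, CMD_RIGHT = 4 };

// Top Arduino side: split a command into three pin levels (bits 0..2).
void encodeCommand(Command cmd, bool pins[3]) {
    for (int i = 0; i < 3; ++i) pins[i] = (cmd >> i) & 1;
}

// Bottom Arduino side: reassemble the command from the pin levels.
Command decodeCommand(const bool pins[3]) {
    int v = pins[0] | (pins[1] << 1) | (pins[2] << 2);
    return static_cast<Command>(v);
}
```

Three pins give eight possible commands, which is more than enough for the motion primitives described below, and the encoding is stateless, so a glitch on one pin corrupts at most one command rather than desynchronizing the two boards.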
The bottom Arduino handled the low-level motor actions for driving. Its job was simple: receive commands and execute motion primitives like forward, backward, left turn, and right turn. Keeping this controller focused on actuation made it easier to tune motor behavior and isolate drivetrain issues.
The top Arduino acted as the main brain of the robot. It read the sensors, decided what phase of the run the robot was in, and sent commands to the bottom Arduino. It also controlled the high-level autonomous logic for aligning to the walls, driving to the hog line, positioning for the shot, shooting, and returning.
We originally started with one Arduino for drive control, but once the robot gained more sensors and functionality, adding a second Arduino became the simplest way to expand without overcomplicating the wiring and code structure. This also matched our development process: drive and dispensing were working first, and autonomy was layered on top afterward.
The robot behavior was implemented as a finite state machine (FSM). Instead of trying to control everything through one continuous loop, the match was broken into a sequence of clearly defined behavioral states. Each state had a single goal and simple transition conditions based on sensor readings or completion of an action. This structure made the robot easier to debug and allowed us to isolate problems to specific phases of the run.
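The FSM structure described above can be sketched as an enum plus a transition function (state names are paraphrased from the phases described below; the transition conditions are illustrative, not the exact sensor checks we used):

```cpp
#include <cassert>

// One state per phase of the run; names paraphrase the phases below.
enum State { INIT, ALIGN, DRIVE_TO_HOG, POSITION, SHOOT,
             RETURN_HOME, RELOAD, DONE };

// Each state has a single goal; when its condition is met, the FSM
// advances to the next phase, otherwise it stays put.
State nextState(State s, bool conditionMet) {
    if (!conditionMet) return s;          // keep working on the current goal
    switch (s) {
        case INIT:         return ALIGN;
        case ALIGN:        return DRIVE_TO_HOG;
        case DRIVE_TO_HOG: return POSITION;
        case POSITION:     return SHOOT;
        case SHOOT:        return RETURN_HOME;
        case RETURN_HOME:  return RELOAD;
        case RELOAD:       return ALIGN;  // begin the next navigation cycle
        default:           return DONE;
    }
}
```

Because transitions only ever happen in this one function, a misbehaving run can be traced to a single state by logging which state the robot was in when things went wrong.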
When the robot powered on, it entered an initialization state where the sensors stabilized and the robot prepared for autonomous operation. This ensured that the system always started from a known configuration before beginning navigation. It also began a loop of three navigation cycles, since competition rules allotted us a total of 8 pucks, split 3-3-2 across the runs.
The alignment state oriented the robot relative to the arena walls using the ultrasonic sensors. The robot spun in place a few times until the back and left sensor readings fell within predefined bounds, within tolerance. We used the two left sensors to verify that the robot was parallel to the wall by checking the difference between their readings.
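The parallelism check reduces to comparing the two left readings against a tolerance: if the robot is parallel to the wall, the front-left and rear-left sensors should report nearly the same distance. A minimal sketch (units and the tolerance value are illustrative, not our tuned numbers):

```cpp
#include <cassert>
#include <cmath>

// Parallel to the wall when the two left ultrasonic readings agree
// within a tolerance (distances in cm; tolerance is illustrative).
bool isParallel(float frontLeftCm, float rearLeftCm, float tolCm = 1.5f) {
    return std::fabs(frontLeftCm - rearLeftCm) <= tolCm;
}
```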
In this state the robot drove forward toward the hog line while continuously checking its ultrasonic sensors. Small steering corrections were applied to maintain the desired distance from the wall and keep the robot properly oriented for the shot. We incorporated tape sensing to detect whether we had crossed the midline. This state exits when the robot reaches a target distance window measured from the back wall.
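The small steering corrections can be sketched as a deadband controller on the left-wall distance: inside the band, drive straight; outside it, nudge toward or away from the wall. This is a sketch of the idea, assuming a left-wall reference and illustrative thresholds, not our exact tuning:

```cpp
#include <cassert>

enum Steer { STEER_STRAIGHT, STEER_LEFT, STEER_RIGHT };

// Hold a target distance from the left wall with a deadband; distances
// in cm. Outside the band, steer back toward the target.
Steer wallCorrection(float leftCm, float targetCm, float deadbandCm) {
    if (leftCm > targetCm + deadbandCm) return STEER_LEFT;   // drifted away
    if (leftCm < targetCm - deadbandCm) return STEER_RIGHT;  // too close
    return STEER_STRAIGHT;
}
```

The deadband matters with noisy ultrasonics: without it, the robot would oscillate between left and right corrections on every reading.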
Once near the shooting area, the robot performed a short positioning routine to achieve a repeatable shooting pose. This involved small adjustments in orientation and distance before triggering the shooting mechanism (moving backward, turning, moving forward, then turning in the opposite direction).
The shooting state activated the conveyor motor for a fixed duration to release the pucks down the ramp.
After shooting, the robot navigated back through the return corridor toward the starting area. This involved a sequence of turns and backward motion while again using ultrasonic feedback to maintain orientation.
Once the robot completed its return (based on the back sensor reading), it entered a stop-and-reload state, where it waited for pucks to be reloaded, or stopped entirely if this was the final run.
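The 3-3-2 puck split mentioned in the initialization state maps naturally onto this stop-and-reload decision: the cycle count determines both how many pucks to expect and whether this run is the last. A small sketch (the function names are hypothetical):

```cpp
#include <cassert>

// Pucks allotted per navigation cycle (3-3-2 across three runs,
// 8 pucks total); cycle is 0-based.
int pucksForCycle(int cycle) {
    const int allotment[3] = {3, 3, 2};
    return (cycle >= 0 && cycle < 3) ? allotment[cycle] : 0;
}

// The stop-and-reload state stops entirely after the third cycle.
bool isFinalRun(int cycle) {
    return cycle >= 2;
}
```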
The FSM structure made the software much easier to develop and debug. Because each phase of the run had its own state, we could quickly identify which part of the system needed improvement when something went wrong.
Closed-loop navigation using the ultrasonic sensors also worked well. By continuously checking wall distances and correcting steering, the robot was able to compensate for drivetrain drift and maintain a consistent path across the arena.
Ultrasonic sensors were sometimes noisy, especially during turning, which occasionally caused incorrect corrections or unstable behavior. Because of this, the software had to rely on tolerance ranges and repeated adjustments rather than precise measurements.
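One common way to suppress the single-sample spikes that ultrasonic sensors produce, consistent with relying on tolerance ranges rather than precise measurements (though not necessarily the exact filtering we used), is a median filter over consecutive readings:

```cpp
#include <cassert>
#include <algorithm>

// Median of the last three readings: a single outlier spike (e.g. a
// bad echo during a turn) cannot become the output.
float median3(float a, float b, float c) {
    return std::max(std::min(a, b), std::min(std::max(a, b), c));
}
```

A median filter adds a little latency (a spike is only fully rejected after the next good reading) but, unlike averaging, it never blends an outlier into the result.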
Another challenge was the robot running into a wall during the drive to the hog line or the return. We had no reliable strategy for detecting that the robot was stuck. We tried several approaches to resolve this, but they ended up reducing overall performance, because noisy ultrasonic readings caused the robot to conclude it was stuck when it wasn't.