Due to cost, time constraints, and the ongoing pandemic, we did not design or implement any hardware. Instead, we worked with a simulated version of the NuBot soccer robot from [10].
The final goal for our project was to pit two teams of robots against each other. To get there, we set a list of sub-goals that would allow us to progressively implement the required functionality.
1 vs. 0
Retrieve the ball and shoot on a field containing static and dynamic obstacles.
Tests basic path planning, control, and our initial states.
1 vs. 1
Similar to 1 vs. 0, but with our obstacle avoidance implemented.
2 vs. 2 (MVP)
Implemented a goalie robot with a separate state diagram, along with basic passing
Tested path planning more rigorously, as well as passing algorithms
3 vs. 3
Added dynamic state changes based on the locations of the robots
A "winger" robot that gets into an open position if it is not the closest robot
For our final implementation, we used the real-time A* algorithm for path planning, which samples a circle of points around the robot and returns the point that brings the robot closest to the goal without causing the robot to collide with any obstacles. The robot then moves to this point and resamples at its new location.
Overall, this planner is very aggressive and works well for robots fighting for the ball. However, there is one edge case in particular that breaks this approach: if the robot collides with an obstacle and finds itself within the obstacle's bounds, the path planner has two options. In our implementation, the robot either backs up along a trajectory that moves it away from the center of the obstacle as quickly as possible, or it moves in a random direction. We chose moving in a random direction as the default behavior because our only obstacles are other moving robots, so it is unlikely that a robot will stay stuck for longer than one time step. Had the planner instead opted for backing up, a robot would never be able to separate an opposing robot from the ball, as it would prioritize obstacle avoidance over obtaining the ball.
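For concreteness, the following Python sketch shows one planning iteration as described above: sample a circle of candidate points, discard any that collide with an obstacle, pick the one closest to the goal, and fall back to a random direction when the robot starts inside an obstacle. The function name, sampling radius, and circular-obstacle model are illustrative placeholders, not our exact implementation.

```python
import math
import random

def plan_step(robot_pos, goal_pos, obstacles,
              sample_radius=0.5, num_samples=36, obstacle_radius=0.6):
    """One planning iteration: sample a circle of points around the robot and
    return the collision-free sample closest to the goal.
    (Illustrative sketch; names and constants are placeholders.)"""
    rx, ry = robot_pos

    # Default fallback: if the robot is already inside an obstacle,
    # step in a random direction rather than backing up.
    if any(math.dist(robot_pos, obs) < obstacle_radius for obs in obstacles):
        angle = random.uniform(0.0, 2.0 * math.pi)
        return (rx + sample_radius * math.cos(angle),
                ry + sample_radius * math.sin(angle))

    best_point = robot_pos
    best_cost = math.dist(robot_pos, goal_pos)
    for i in range(num_samples):
        angle = 2.0 * math.pi * i / num_samples
        candidate = (rx + sample_radius * math.cos(angle),
                     ry + sample_radius * math.sin(angle))
        # Skip samples that would put the robot inside an obstacle.
        if any(math.dist(candidate, obs) < obstacle_radius for obs in obstacles):
            continue
        cost = math.dist(candidate, goal_pos)
        if cost < best_cost:
            best_point, best_cost = candidate, cost

    # The caller moves the robot toward best_point and resamples there.
    return best_point
```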
The control system for the NuBot consists of two layers: the upper layer handles trajectory tracking, while the lower layer handles speed tracking. On the actual hardware, the low-level controller distributes speed commands to each motor, each of which has its own controller that computes the proper amount of current to send to the motor. However, the simulation we used allows us to focus on planning and game strategy by ignoring the complexities of the motor dynamics. Instead, it sends the velocity commands directly to the Gazebo plugin while ensuring we obey the kinematic constraints (e.g., maximum velocity) of the robot.
Typically, the problem of trajectory tracking could be solved using model predictive control (MPC) or proportional-integral-derivative (PID) control with appropriate feedforward terms. However, our planner only computes a single target pose during each iteration, so there is no need to introduce these methods in the upper control layer; the lower layer already contains a PD controller for speed. Instead, we pass the target pose and velocity to the low-level controller, which first computes the desired speed v as

v = K_p d + K_d ḋ
where d is the distance from the robot to the target, ḋ is its rate of change, and the K's are the PD gains. Next, the individual velocity components are computed as

v_x = v cos(θ_rel),  v_y = v sin(θ_rel)
where θ_rel is the polar position of the target in the robot frame. The angular velocity is computed in the same manner, without the need to consider multiple directions as done above. The resulting velocity commands are checked for saturation before being sent to the Gazebo plugin.
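A minimal sketch of this low-level step, assuming the PD form given above, is shown below; the gain values and saturation limits are illustrative placeholders rather than the tuned values used in our controller.

```python
import math

K_P, K_D = 1.5, 0.2       # placeholder PD gains
MAX_SPEED = 3.0           # placeholder kinematic limit (m/s)
MAX_OMEGA = 4.0           # placeholder angular-rate limit (rad/s)

def speed_command(d, d_dot, theta_rel, theta_err, theta_err_dot):
    """Compute saturated body-frame velocity commands toward the target.
    d:         distance to the target
    d_dot:     rate of change of that distance
    theta_rel: polar position of the target in the robot frame
    (Illustrative sketch, not the exact controller code.)"""
    # PD law for the desired translational speed.
    v = K_P * d + K_D * d_dot

    # Decompose the speed into body-frame components.
    vx = v * math.cos(theta_rel)
    vy = v * math.sin(theta_rel)

    # Angular velocity uses the same PD structure on the heading error.
    w = K_P * theta_err + K_D * theta_err_dot

    # Check for saturation before sending the command to the Gazebo plugin.
    vx = max(-MAX_SPEED, min(MAX_SPEED, vx))
    vy = max(-MAX_SPEED, min(MAX_SPEED, vy))
    w = max(-MAX_OMEGA, min(MAX_OMEGA, w))
    return vx, vy, w
```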
Overall RQT Graph for 2v2 Game-state
The /reset_ball_node subscribes to a topic containing the ball position; in the event of a goal or the ball moving out of bounds, it resets the field accordingly by publishing to /gazebo/set_model_state.
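A rough sketch of this reset behavior is given below; the ball-position topic name, the Point message type, the field dimensions, and the ball model name are assumptions made for illustration, not the exact contents of reset_ball.py.

```python
#!/usr/bin/env python
# Rough sketch of the field-reset behavior (not the exact reset_ball.py code).
import rospy
from geometry_msgs.msg import Point
from gazebo_msgs.msg import ModelState

FIELD_X, FIELD_Y = 9.0, 6.0   # assumed half-field dimensions (m)
BALL_MODEL = 'football'       # assumed Gazebo model name for the ball

class BallResetter(object):
    def __init__(self):
        self.state_pub = rospy.Publisher('/gazebo/set_model_state',
                                         ModelState, queue_size=1)
        # Assumed topic carrying the ball position as a geometry_msgs/Point.
        rospy.Subscriber('/ball_position', Point, self.ball_cb)

    def ball_cb(self, msg):
        # Reset the ball to center field if it crosses a goal line or sideline.
        if abs(msg.x) > FIELD_X or abs(msg.y) > FIELD_Y:
            reset = ModelState()
            reset.model_name = BALL_MODEL
            reset.pose.position.x = 0.0
            reset.pose.position.y = 0.0
            reset.pose.orientation.w = 1.0
            self.state_pub.publish(reset)

if __name__ == '__main__':
    rospy.init_node('reset_ball_node')
    BallResetter()
    rospy.spin()
```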
The NuBot goalie plans its path using the '/nubot_goalie' node, sending commands to the /NuBot_nubot_hwcontroller1 node for low-level control via the NuBot1/nubotcontrol/actioncmd topic to move the robot. The path planner takes in obstacle positions, the ball position, and its own current position from the /NuBot1/omnivision/OmniVisionInfo topic, and its ball-holding state from /NuBot1/ballisholding/BallIsHolding, for decision making.
The NuBot player plans its path using the '/NuBot2_brain' node, sending commands to the /NuBot_nubot_hwcontroller2 node for low-level control via the NuBot2/nubotcontrol/actioncmd topic to move the robot. The path planner takes in obstacle positions, the ball position, and its own current position from the /NuBot2/omnivision/OmniVisionInfo topic, and its ball-holding state from /NuBot2/ballisholding/BallIsHolding, for decision making.
*works for both NuBot2 and NuBot3
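The structure shared by these brain nodes can be sketched as follows; the nubot_common message names and fields are assumptions standing in for the actual NuBot message definitions, and the decision-making logic is elided.

```python
#!/usr/bin/env python
# Skeleton of a player "brain" node (illustrative sketch, not the exact code).
import rospy
from nubot_common.msg import OminiVisionInfo, BallIsHolding, ActionCmd  # assumed names

class PlayerBrain(object):
    def __init__(self, ns='NuBot2'):
        self.holding_ball = False
        self.cmd_pub = rospy.Publisher('/%s/nubotcontrol/actioncmd' % ns,
                                       ActionCmd, queue_size=1)
        rospy.Subscriber('/%s/omnivision/OmniVisionInfo' % ns,
                         OminiVisionInfo, self.vision_cb)
        rospy.Subscriber('/%s/ballisholding/BallIsHolding' % ns,
                         BallIsHolding, self.holding_cb)

    def holding_cb(self, msg):
        self.holding_ball = msg.BallIsHolding  # assumed field name

    def vision_cb(self, msg):
        # Decision making: use the ball, self, and obstacle positions plus the
        # ball-holding state to pick a target, run the path planner, and
        # publish the resulting command to the hardware controller.
        cmd = ActionCmd()
        # ... fill in the target pose / velocity from the path planner here ...
        self.cmd_pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('NuBot2_brain')
    PlayerBrain()
    rospy.spin()
```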
These nodes and topics provide the overhead_camera node with raw image data used to identify robot and ball positions.
The rival goalie plans its path using the '/rival_goalie' node, sending commands to the /rival_nubot_hwcontroller1 node for low-level control via the rival1/nubotcontrol/actioncmd topic to move the robot. The path planner takes in obstacle positions, the ball position, and its own current position from the /rival1/omnivision/OmniVisionInfo topic, and its ball-holding state from /rival1/ballisholding/BallIsHolding, for decision making.
The rival player plans its path using the '/rival2_brain' node, sending commands to the /rival_nubot_hwcontroller2 node for low-level control via the rival2/nubotcontrol/actioncmd topic to move the robot. The path planner takes in obstacle positions, the ball position, and its own current position from the /rival2/omnivision/OmniVisionInfo topic, and its ball-holding state from /rival2/ballisholding/BallIsHolding, for decision making.
*works for both rival2 and rival3
This file launches the 2v2 game state by calling two_v_two_helper.launch and starts the path planner and hardware controller for each robot from goalie.py, player_brain.py, and nubot_hw_controller.cc. The launch file also starts reset_ball.py, which resets the field in the event of a goal or the ball leaving the field.
*3v3 functions in the same way
This file helps launch the 2v2 game state by launching the empty field and spawning the Gazebo models for each robot as well as the soccer ball model.
*3v3 functions in the same way
This file launches the 1v4 path-planning demo game state by calling path_planning_demo_helper.launch and starts the path planner and hardware controller for each robot from solo_player_brain.py, obstacle_brain.py, and nubot_hw_controller.cc. The launch file also starts reset_ball.py, which resets the field in the event of a goal or the ball leaving the field. obstacle_brain.py overrides path planning to make its robots move back and forth, acting as obstacles in the middle of the field.
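The back-and-forth obstacle behavior can be sketched as a simple time-based target, as below; the amplitude, period, and function name are placeholders, and the real obstacle_brain.py presumably issues its commands through the same hardware-controller interface as the player brains.

```python
import math

def obstacle_target(t, base_x, amplitude=1.5, period=6.0):
    """Target position for an obstacle robot at time t: hold its x lane and
    oscillate back and forth along y. (Illustrative placeholder values.)"""
    y = amplitude * math.sin(2.0 * math.pi * t / period)
    return base_x, y
```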
This file helps launch the 1v4 path-planning demo game state by launching the empty field and spawning the Gazebo models for each robot as well as the soccer ball model.