The hardware setup for Parkotron was fairly straightforward. We used a Turtlebot3 to represent a car, relying on its onboard LiDAR sensor and Raspberry Pi camera for sensing. The parking environment was constructed from cardboard walls representing curbs or occupied spaces, with other Turtlebots standing in for other cars. AR tags were used to represent parking signs.
The bulk of our code implementation is in the "parking" package, which contains the Controller class (ControllerV6.py) run by the parking.py executable node. Outside of these files, we also adapted the "ar_track_alvar" package from Lab 4 to use the Turtlebot's onboard Raspberry Pi camera instead of the USB webcam used in the lab. This node is brought up by the "parking.launch" launch file.
Similar to the code structure in Lab 8, parking.py creates a Controller, initializes it, and (unlike Lab 8) runs the controller method of the class object. In the initialization step, subscriber callbacks are set up for the AR pose marker topic "/ar_pose_marker" and the laser scan topic "/scan". Initialization returns False if either of these steps fails, causing the node to shut down safely. Additionally, the TF buffer and listener are saved as class variables.
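A minimal sketch of this setup is shown below. The topic names match those described above, but the callback bodies and error handling are illustrative rather than a verbatim excerpt of ControllerV6.py.

```python
#!/usr/bin/env python
# Illustrative sketch of the parking.py / Controller setup (not a verbatim
# excerpt of ControllerV6.py).
import rospy
import tf2_ros
from ar_track_alvar_msgs.msg import AlvarMarkers
from geometry_msgs.msg import Twist
from sensor_msgs.msg import LaserScan


class Controller(object):
    def initialize(self):
        try:
            # Callbacks cache the latest sensor readings as class variables.
            self.scan_sub = rospy.Subscriber("/scan", LaserScan, self.scan_callback)
            self.ar_sub = rospy.Subscriber("/ar_pose_marker", AlvarMarkers, self.ar_callback)

            # Publisher used by the state machine to command the robot.
            self.cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)

            # TF buffer and listener, saved for the later odom -> base_link lookups.
            self.tf_buffer = tf2_ros.Buffer()
            self.tf_listener = tf2_ros.TransformListener(self.tf_buffer)
        except rospy.ROSException:
            return False
        return True

    def scan_callback(self, msg):
        self.latest_scan = msg

    def ar_callback(self, msg):
        self.latest_markers = msg.markers

    def controller(self):
        pass  # state machine loop, described in the following sections


if __name__ == "__main__":
    rospy.init_node("parking")
    ctrl = Controller()
    if ctrl.initialize():
        ctrl.controller()
```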
The controller method implements a state machine that controls the Turtlebot's movement by publishing Twist messages to its /cmd_vel topic. The state machine diagram is shown below.
The Turtlebot begins operation in state 1, where it is given a constant forward motion command. LiDAR range measurements in front of, to the right of, and behind the robot are continuously updated and saved via the /scan topic callback. If an obstacle has been detected in front of the robot for several consecutive iterations, this is registered as a dead end: the Turtlebot makes a U-turn (state 2) and continues patrolling in state 1. Conversely, if no obstacle has been detected to the right of the robot for several consecutive iterations, this is registered as a potential parking space, and the robot rotates to the right (state 3) to use its forward-facing Raspberry Pi camera to check whether the space is valid (state 5).
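As a hedged illustration of this LiDAR-based detection (the angular indices, distance thresholds, and counter length below are placeholders, not the values tuned for our robot):

```python
import math

# Placeholder thresholds; the values used on the robot were tuned empirically.
FRONT_OBSTACLE_DIST = 0.30   # meters
RIGHT_GAP_DIST = 0.50        # meters
CONSECUTIVE_REQUIRED = 5     # scans in a row before a detection is trusted


def extract_ranges(scan):
    """Return (front, right, rear) readings from a sensor_msgs/LaserScan.

    Scan angles increase counterclockwise from the robot's heading, so index 0
    is straight ahead, n/2 is behind, and 3n/4 is roughly the right side.
    """
    n = len(scan.ranges)
    return scan.ranges[0], scan.ranges[(3 * n) // 4], scan.ranges[n // 2]


def update_detection(front, right, front_blocked, right_open):
    """Update consecutive-detection counters so one noisy scan never triggers a state change."""
    front_blocked = front_blocked + 1 if front < FRONT_OBSTACLE_DIST else 0
    right_open = right_open + 1 if (right > RIGHT_GAP_DIST or math.isinf(right)) else 0
    dead_end = front_blocked >= CONSECUTIVE_REQUIRED      # -> U-turn (state 2)
    possible_spot = right_open >= CONSECUTIVE_REQUIRED    # -> rotate right to check (state 3)
    return front_blocked, right_open, dead_end, possible_spot
```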
The camera is set up to recognize a single AR tag (marker #16) as valid. If the detected AR tag is different or no tag is detected, the robot rotates left (state 4) to face forward again and keeps patrolling. Otherwise, the robot rotates left and begins its parallel parking procedure (state 6). This procedure consists of pulling forward up to the car in front, then backing into the spot by first turning to the right, then going straight, then turning to the left to straighten out. Although the Turtlebot is capable of point turns, we programmed the parking procedure to mimic the behavior of a real car, which parallel parks according to the procedure just described. Once it is straightened in the parking space, the Turtlebot backs up (state 7) until it is sufficiently close to the car behind it, then stops the node. Thus, parking is complete.
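A simplified sketch of the spot check and the parking maneuver follows. The velocities and durations here are placeholders: in the actual implementation, each phase ends based on LiDAR distance readings rather than a fixed timer, and only marker #16 is accepted as a valid parking sign.

```python
import rospy
from geometry_msgs.msg import Twist

VALID_TAG_ID = 16  # the only AR marker accepted as a valid parking sign


def spot_is_valid(markers):
    """True if the camera currently sees the valid parking-sign tag."""
    return any(m.id == VALID_TAG_ID for m in markers)


def drive_for(cmd_pub, linear, angular, duration, rate_hz=10):
    """Publish a constant Twist for a fixed duration (open-loop, for illustration only)."""
    cmd = Twist()
    cmd.linear.x = linear
    cmd.angular.z = angular
    rate = rospy.Rate(rate_hz)
    end_time = rospy.Time.now() + rospy.Duration(duration)
    while rospy.Time.now() < end_time and not rospy.is_shutdown():
        cmd_pub.publish(cmd)
        rate.sleep()


def parallel_park(cmd_pub):
    """State 6 as a sequence of motions; the spot is on the robot's right."""
    drive_for(cmd_pub, 0.10, 0.0, 2.0)    # pull forward alongside the car in front
    drive_for(cmd_pub, -0.10, 0.5, 2.0)   # reverse while the rear swings right into the spot
    drive_for(cmd_pub, -0.10, 0.0, 1.5)   # back straight in
    drive_for(cmd_pub, -0.10, -0.5, 2.0)  # reverse while turning to straighten out
    cmd_pub.publish(Twist())              # stop; state 7 then backs up to the car behind
```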
The U-turn and the 90-degree left and right turns were implemented using proportional control. The pose of the robot was obtained as a transformation between the odom frame and the base_link frame, and the error between the desired yaw and the current yaw was multiplied by a proportional gain K to compute the angular speed command at each time step.
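A sketch of this proportional yaw controller is below; the gain, tolerance, and helper names are illustrative rather than copied from our code.

```python
import math

import rospy
import tf2_ros
from geometry_msgs.msg import Twist
from tf.transformations import euler_from_quaternion

K = 1.0               # illustrative proportional gain
YAW_TOLERANCE = 0.05  # radians


def current_yaw(tf_buffer):
    """Read the robot's yaw from the odom -> base_link transform."""
    t = tf_buffer.lookup_transform("odom", "base_link", rospy.Time(0), rospy.Duration(1.0))
    q = t.transform.rotation
    _, _, yaw = euler_from_quaternion([q.x, q.y, q.z, q.w])
    return yaw


def angle_error(target, current):
    """Smallest signed difference between two angles, wrapped to [-pi, pi]."""
    d = target - current
    return math.atan2(math.sin(d), math.cos(d))


def turn_to(target_yaw, tf_buffer, cmd_pub, rate_hz=10):
    """Rotate in place until the yaw error is small, commanding angular speed = K * error."""
    rate = rospy.Rate(rate_hz)
    while not rospy.is_shutdown():
        error = angle_error(target_yaw, current_yaw(tf_buffer))
        if abs(error) < YAW_TOLERANCE:
            break
        cmd = Twist()
        cmd.angular.z = K * error
        cmd_pub.publish(cmd)
        rate.sleep()
    cmd_pub.publish(Twist())  # stop once aligned
```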
We made two key changes to the parking algorithm since the project demo: the detection method and the parking procedure.
We initially implemented obstacle and gap detection by adapting the occupancy grid from Lab 8. The Controller class subscribed to the occupancy grid topic, /vis/map, and used the log-odds ratios of the cells in front of and to the right of the Turtlebot as measures of whether those areas were occupied or free. However, despite our best efforts to optimize the occupancy grid by shrinking the grid size and checking over multiple time steps, detection with the occupancy grid remained inconsistent, so we changed detection to use the LiDAR data directly. More discussion of the occupancy grid is included in the Results and Conclusion.
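For reference, the core of the occupancy-grid check looked roughly like the sketch below; the log-odds thresholds and the world-to-cell conversion here are illustrative, not the exact values or indexing from our grid.

```python
# Illustrative log-odds thresholds for classifying a cell.
OCCUPIED_THRESHOLD = 2.0
FREE_THRESHOLD = -2.0


def cell_status(log_odds_grid, resolution, origin, x, y):
    """Classify the grid cell containing world point (x, y) by its log-odds value."""
    col = int((x - origin[0]) / resolution)
    row = int((y - origin[1]) / resolution)
    value = log_odds_grid[row][col]
    if value > OCCUPIED_THRESHOLD:
        return "occupied"
    if value < FREE_THRESHOLD:
        return "free"
    return "unknown"  # not enough evidence either way
```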
Secondly, because the Turtlebot is capable of point turns, it can "parallel park" by heading forward into a parking spot and then turning on its own axis to align itself parallel to the street. Our initial parking procedure did exactly this: upon detecting a valid spot with the Raspberry Pi camera, the Turtlebot would move forward into the spot until it was within a certain distance of the "curb" (the cardboard wall with the AR tag) and then rotate left. After the project demo, we changed this behavior so that the autonomous parking algorithm would be more applicable to real cars, implementing the parallel parking procedure described previously in the state machine section.