Our robot ended up working as we had hoped, performing quite consistently in the environment we set up for it. The tasks it can perform are shown in the state diagram on the implementation page, but here is a summary:
Parkotron patrols in a straight line until it reaches a dead end, at which point it turns around before running into anything
Parkotron detects open spaces on its right (it occasionally reports false positives, but very rarely misses a space)
Parkotron turns right to check for a parking tag in the open space, using its camera to decide whether or not to park there
Parkotron continues the patrol if the space is invalid
If the space is valid, Parkotron pulls ahead of the parking spot, then parallel parks into it
Parkotron completes all of these tasks autonomously, only stopping the sequence of actions once it has successfully parked
Our finished solution met all of our design criteria, though it accomplished several of them differently than the original design envisioned.
For example, after implementing space detection using an occupancy grid, we realized it was not the best solution. Open spaces could not always be cleanly discretized: the grid squares rarely lined up with the actual parking space, resulting in inconsistent detection. As a result, we scrapped the occupancy grid in our final solution and instead searched the LiDAR scan data directly for open space relative to the robot.
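To illustrate the idea, here is a minimal sketch of direct LiDAR space detection, assuming a standard sensor_msgs/LaserScan on /scan; the thresholds, window size, and the right-side scan geometry are illustrative rather than our exact values:

```python
# Sketch of direct LiDAR space detection: a parking space shows up as
# a contiguous arc of unusually long range readings on the right side.
import math
import rospy
from sensor_msgs.msg import LaserScan

SPACE_DEPTH = 0.5   # readings beyond this (meters) count as "open" (illustrative)
MIN_ARC = 15        # consecutive open readings needed to call it a space

def scan_callback(scan):
    # Index of the reading pointing roughly -90 degrees (to the right),
    # assuming angle 0 is straight ahead and angles increase CCW.
    right = int((-math.pi / 2 - scan.angle_min) / scan.angle_increment)
    window = scan.ranges[right - 30 : right + 30]

    run = 0
    for r in window:
        run = run + 1 if r > SPACE_DEPTH else 0
        if run >= MIN_ARC:
            rospy.loginfo("Open space detected on the right")
            return

rospy.init_node("space_detector")
rospy.Subscriber("/scan", LaserScan, scan_callback)
rospy.spin()
```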
The pre-programmed route we originally envisioned turned out not to be pre-programmed at all. We simply let the robot travel straight until it detects an imminent collision, at which point it performs a "u-turn" to patrol in the other direction and look for parking on the other side of the street.
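A simplified sketch of that patrol loop follows; /cmd_vel and /scan are the standard Turtlebot topics, while the speeds, the stop distance, and the assumption that ranges[0] points straight ahead are illustrative:

```python
# Sketch of the patrol behavior: drive straight until something appears
# close ahead, then turn around and patrol in the other direction.
import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import LaserScan

STOP_DIST = 0.4  # meters; assumed collision threshold

class Patrol:
    def __init__(self):
        self.pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("/scan", LaserScan, self.on_scan)
        self.blocked = False

    def on_scan(self, scan):
        # ranges[0] is assumed to be the reading straight ahead
        front = scan.ranges[0]
        self.blocked = 0.0 < front < STOP_DIST

    def run(self):
        rate = rospy.Rate(10)
        while not rospy.is_shutdown():
            cmd = Twist()
            if self.blocked:
                cmd.angular.z = 0.5   # in the real robot, a closed-loop 180-degree turn takes over here
            else:
                cmd.linear.x = 0.15   # patrol straight ahead
            self.pub.publish(cmd)
            rate.sleep()

rospy.init_node("patrol")
Patrol().run()
```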
We were successful in using the mounted camera to check for valid parking via AR tags, and this check proved quite consistent.
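The tag check itself is essentially a presence test. A minimal sketch, assuming the common ar_track_alvar ROS package (the topic name and tag IDs are illustrative):

```python
# Sketch of the AR tag validity check: a space is valid if a tag with
# a known "legal parking" ID is visible from the camera.
import rospy
from ar_track_alvar_msgs.msg import AlvarMarkers

VALID_IDS = {0, 1}  # hypothetical IDs marking legal parking spots

def on_markers(msg):
    seen = {m.id for m in msg.markers}
    if seen & VALID_IDS:
        rospy.loginfo("Valid parking tag visible: park here")
    elif seen:
        rospy.loginfo("Tag visible but invalid: keep patrolling")

rospy.init_node("tag_checker")
rospy.Subscriber("/ar_pose_marker", AlvarMarkers, on_markers)
rospy.spin()
```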
The parallel parking was implemented as a pre-set command sequence: the robot pulls up alongside the car in front of the space, then follows a fixed path backwards into the spot, stopping before it runs into the car behind it.
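In sketch form, the pre-set maneuver is just a timed sequence of velocity commands; the durations and speeds below are illustrative, not our tuned values:

```python
# Sketch of the pre-set parallel parking maneuver as timed segments of
# constant velocity commands.
import rospy
from geometry_msgs.msg import Twist

def drive(pub, lin, ang, seconds):
    """Publish a constant (linear, angular) command for a fixed time."""
    cmd = Twist()
    cmd.linear.x, cmd.angular.z = lin, ang
    end = rospy.Time.now() + rospy.Duration(seconds)
    rate = rospy.Rate(10)
    while rospy.Time.now() < end and not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()
    pub.publish(Twist())  # stop between segments

rospy.init_node("parallel_park")
pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
rospy.sleep(1.0)  # let the publisher connect

drive(pub, 0.15, 0.0, 2.0)    # pull ahead of the spot
drive(pub, -0.1, -0.8, 1.5)   # reverse while rotating toward the curb
drive(pub, -0.1, 0.8, 1.5)    # counter-rotate to straighten out
drive(pub, 0.05, 0.0, 0.5)    # nudge forward to center in the spot
```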
We overcame several difficulties over the course of completing this project:
Not being able to access the onboard Raspberry Pi camera
Found proper access commands after some research and TA help
Difficulty accessing the location of the Turtlebot in the correct frame
Found correct way to access this information after some research and TA help
False positives when detecting open parking spaces, or failing to detect some spaces at all (inconsistent detection)
Theorized this could be because a parking space spanned more than one occupancy grid square, or sat in between two squares
First attempted to fix this problem by shrinking the occupancy grid square size and adding multiple checks over a time span, but space detection was still inconsistent
Ultimately fixed by switching to using the LiDAR scanner data directly to analyze open space relative to the robot (the approach sketched earlier)
Yaw values ranging from pi to negative pi and rolling over between the two extremes complicated turning controls
Fixed mathematically with an if statement that catches the wraparound edge case (see the sketch after this list)
Inconsistent collision detection
Fixed by switching to using the LiDAR scanner data directly, as with space detection
AR tag frame position was inconsistent and fluctuated drastically as the robot moved
Fixed by using the robot's measured proximity to the parking sign rather than the AR tag frame position data
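The yaw wraparound fix mentioned above reduces to normalizing the angle error before handing it to the turn controller. A minimal sketch of the idea (the helper name yaw_error is illustrative):

```python
# Yaw from the odometry lives in (-pi, pi], so the raw error
# (goal - current) can jump by 2*pi at the seam; an if statement
# folds it back into range.
import math

def yaw_error(goal, current):
    """Shortest signed angular distance from current to goal."""
    err = goal - current
    if err > math.pi:
        err -= 2 * math.pi
    elif err < -math.pi:
        err += 2 * math.pi
    return err

# e.g. goal = 3.1, current = -3.1: the naive error is 6.2 rad, but the
# true shortest turn is a small clockwise nudge of about -0.083 rad
print(yaw_error(3.1, -3.1))
```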
Our solution uses the tools already equipped on the Turtlebot to great effect. One possible area of improvement is the parallel parking procedure itself: currently, the Turtlebot simply follows a pre-programmed route and cannot make real-time adjustments to accommodate varied situations such as smaller spaces or differently shaped surrounding vehicles. Given more time, we would definitely have liked to improve the parallel parking mechanism.
We would also like to add a camera to the side of the robot so it could check for valid parking without turning to face the spots. Furthermore, rather than using AR tags, we would have liked to use computer vision to recognize actual parking meters, parking signs, and differently colored curbs to better simulate a real-world parking situation. We would also have liked to implement a more complex patrolling system, able to handle perpendicular streets and more than one lane per street. Lastly, we would have liked to add integral and derivative control to the turning mechanism. Since it currently uses only proportional control, the end of each turn is extremely slow; adding integral and derivative terms would reduce the settling time of the control input and increase the accuracy of the turn.
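As a rough illustration of that last point, a PID version of the turn controller might look like the sketch below (the gains are illustrative, not tuned values):

```python
# Sketch of adding integral and derivative terms to the turn controller.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def step(self, err, dt):
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# With proportional-only control the command shrinks with the error, so
# the last few degrees of a turn crawl. The integral term keeps pushing
# through that region, and the derivative term damps any overshoot.
controller = PID(kp=1.0, ki=0.1, kd=0.3)
# each control cycle: angular_velocity = controller.step(yaw_error(goal, yaw), dt)
```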