Our physical design is already in the form of a TurtleBot, so our design criteria are entirely at the software level. We wanted our software to meet the following functional demands for directing our robot (sketched in code after the list):
- Detect open parking spaces with an occupancy grid
- Patrol a pre-programmed route
- Check the validity of a space with computer vision and the mounted camera
- Parallel park into the appropriate spot
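To give a concrete picture of how these demands fit together, here is a minimal control-loop sketch, assuming rospy and stub helpers for the individual components; all names below are our own placeholders for illustration, not the actual implementation.

```python
#!/usr/bin/env python
# Hypothetical sketch: the four functional demands sequenced as a
# simple state machine. The helper functions are stubs standing in for
# the occupancy-grid, patrol, vision, and parking components.
import rospy

def follow_patrol_route():
    pass  # stub: drive the next leg of the pre-programmed route

def open_space_detected():
    return False  # stub: query the occupancy grid for a large enough gap

def space_is_valid():
    return False  # stub: check the camera for a valid-parking marker

def parallel_park():
    pass  # stub: execute the backing-in maneuver described below

def run():
    rospy.init_node('parking_controller')
    state = 'PATROL'
    rate = rospy.Rate(10)  # 10 Hz decision loop
    while not rospy.is_shutdown():
        if state == 'PATROL':
            follow_patrol_route()
            if open_space_detected():
                state = 'VALIDATE'
        elif state == 'VALIDATE':
            state = 'PARK' if space_is_valid() else 'PATROL'
        elif state == 'PARK':
            parallel_park()
            break  # parked; stop the loop
        rate.sleep()

if __name__ == '__main__':
    run()
```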
To represent a car, we used a TurtleBot3 Burger, a ROS standard platform robot equipped with a 360-degree LiDAR distance sensor and a Raspberry Pi camera. We chose to build our algorithm on a TurtleBot to take advantage of its compatibility with ROS and its built-in sensors. On a real car, parallel parking is a non-holonomic planning problem; the TurtleBot conveniently sidesteps this because it can point turn in place. While we traded a direct real-world vehicle model for the TurtleBot's innate sensor integration, we reintroduced authentic vehicle behavior in software by implementing a backing-in parallel parking algorithm that mimics how a car would maneuver into a spot.
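To illustrate the backing-in behavior, the following is a minimal open-loop sketch, assuming the TurtleBot's standard `/cmd_vel` Twist interface. The speeds, durations, and turn directions are illustrative placeholders rather than our tuned values; a real run would close the loop with LiDAR and odometry feedback.

```python
#!/usr/bin/env python
# Open-loop sketch of a backing-in parallel park on a TurtleBot.
import rospy
from geometry_msgs.msg import Twist

def drive(pub, linear, angular, seconds):
    """Publish a constant velocity command for a fixed duration."""
    cmd = Twist()
    cmd.linear.x = linear   # forward (+) / reverse (-) speed, m/s
    cmd.angular.z = angular  # turn rate, rad/s
    end = rospy.Time.now() + rospy.Duration(seconds)
    rate = rospy.Rate(10)
    while rospy.Time.now() < end and not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()

if __name__ == '__main__':
    rospy.init_node('parallel_park_demo')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    rospy.sleep(1.0)                # let the publisher connect
    drive(pub, 0.1, 0.0, 3.0)       # pull forward past the open spot
    drive(pub, -0.05, -0.5, 2.0)    # reverse while turning toward the curb
    drive(pub, -0.05, 0.5, 2.0)     # counter-steer to straighten out
    drive(pub, 0.0, 0.0, 0.5)       # stop inside the spot
```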
For our parking spaces, we built an emulated environment out of cardboard and other TurtleBots, representing parked cars, possible dead ends, and curbs or walls. Another design choice we made was to represent valid parking signs with AR tags. AR tags are a robust and consistent target for image recognition, but they do not exist in real-life parking scenarios. In a real-world application of our design, computer vision would instead let the robot recognize parking signs, parking meters, and other parking-specific cues such as curb colors and disabled spots.
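As an example of how the AR-tag validity check could look, here is a sketch assuming the common ar_track_alvar ROS package publishing detections on its default `/ar_pose_marker` topic; the package choice and the `VALID_TAG_IDS` set are our assumptions for illustration, not fixed by the design.

```python
#!/usr/bin/env python
# Sketch of a parking-spot validity check driven by AR tag detections.
import math
import rospy
from ar_track_alvar_msgs.msg import AlvarMarkers

VALID_TAG_IDS = {0, 1}  # assumed IDs of tags we placed on "legal" spots

def callback(msg):
    for marker in msg.markers:
        if marker.id in VALID_TAG_IDS:
            p = marker.pose.pose.position
            dist = math.sqrt(p.x**2 + p.y**2 + p.z**2)  # range to the tag
            rospy.loginfo('Valid parking tag %d seen %.2f m away',
                          marker.id, dist)

if __name__ == '__main__':
    rospy.init_node('parking_tag_checker')
    rospy.Subscriber('/ar_pose_marker', AlvarMarkers, callback)
    rospy.spin()
```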