Initial Objective
Our group wanted to create a system capable of autonomously navigating an arbitrary maze by first mapping it. Next, we wanted to solve the maze with an optimized path from the starting location. Afterwards, we wanted the robot to be able to solve the maze from any arbitrary location within the mapped maze. The next stage of the project was to identify 3D-printed models dispersed within the maze and pin their locations. The final stage was to pick up the identified objects using a custom 1-DOF actuator and either continue navigating the maze with them or carry them to a designated location outside the maze, clearing the maze of all objects.
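To give a rough idea of the path-planning step we had in mind, a breadth-first search over a simple occupancy-grid model of the maze would recover a shortest path from any start cell to a goal cell. The sketch below is only illustrative; the grid, start, and goal are placeholders, not our actual maze.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on a 2D occupancy grid (0 = free, 1 = wall).

    Returns the list of (row, col) cells from start to goal, or None
    if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the parent chain back to the start to recover the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in parents:
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# Example: a small 4x4 maze with a couple of internal walls.
maze = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(shortest_path(maze, (0, 0), (3, 3)))
```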
Difficulties
We decided to use our own turtle-bot so that we could make unrestricted modifications to achieve our goal. As a consequence of this decision, we invested a lot of time building, rebuilding, and modifying the hardware, but even more issues were encountered with the software. It was especially difficult to get the Raspberry Pi camera installed and working. In the end we decided to use the RealSense camera, which also had issues with its cable: we kept running into an error saying no device was found. We ended up using the cable from the lab turtle-bots, which solved the issue. We would also like to say a big thank you to David for helping us with the installation of the RealSense camera.
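For anyone hitting the same "no device found" error, a quick way to confirm whether the RealSense is actually enumerating over USB (as opposed to a problem in the camera software stack) is a short pyrealsense2 check like the one below. This is only a debugging sketch and assumes the pyrealsense2 package is installed on the Pi.

```python
import pyrealsense2 as rs

# List every RealSense device that librealsense can see. If nothing is
# printed, the problem is the cable/USB connection rather than the
# camera node or its configuration.
ctx = rs.context()
devices = list(ctx.query_devices())
if not devices:
    print("No RealSense device found - check the USB cable and port.")
for dev in devices:
    print(dev.get_info(rs.camera_info.name),
          dev.get_info(rs.camera_info.serial_number))
```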
Another issue was fitting the additional hardware onto the turtle-bot, which took a significant amount of time. We chose an Arduino Uno to control the three continuous servo motors, and as a result we needed separate power supplies for the motors and the Arduino. We ended up powering the Arduino from a 9 V battery through a barrel-jack connector and fixing it to the robot with Velcro. For the servo motors, we used a four-AAA battery holder as a 6 V supply and secured it to the robot with Velcro as well. To wire the actuator to the motors, we initially used a breadboard, but it took up too much space, so we switched to a small double-sided prototype PCB and soldered all the wires.
The Raspberry Pi was too slow for object detection, so we opted to use the Google Coral TPU. This allows for faster object detection, but we did not reach the stage of the project where we could use it. The CPU of the Raspberry Pi also heated up and would start to thermally throttle, since we did not have a heat sink; this limited how long we could run the robot in one sitting. We installed a heatsink with a fan towards the end of the project, which solved the heating issue.
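Had we gotten that far, running detection on the Coral would have looked roughly like the standard pycoral flow sketched below. This is not code we deployed; it assumes an Edge TPU-compiled SSD detection model and a label file, and the file names are placeholders.

```python
from PIL import Image
from pycoral.adapters import common, detect
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter

# Placeholder paths: an Edge TPU-compiled detection model and its labels.
interpreter = make_interpreter("ssd_mobilenet_edgetpu.tflite")
interpreter.allocate_tensors()
labels = read_label_file("labels.txt")

# Resize the camera frame to the model's input size and run inference.
image = Image.open("frame.jpg")
_, scale = common.set_resized_input(
    interpreter, image.size, lambda size: image.resize(size, Image.LANCZOS))
interpreter.invoke()

# Each detection carries a class id, a confidence score, and a bounding box.
for obj in detect.get_objects(interpreter, score_threshold=0.5, image_scale=scale):
    print(labels.get(obj.id, obj.id), obj.score, obj.bbox)
```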
Hacks
Instead of autonomously navigating the maze, we chose to use AR tags to move around in it. We wanted to show that we could still move to specific locations within the maze. This approach would also have allowed us to pick up objects by locating them with AR tags instead of object detection, taking us closer to our original goal. A rough sketch of the tag-following idea is shown below.
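The sketch assumes a ROS AR-tag tracker such as ar_track_alvar publishing AlvarMarkers on /ar_pose_marker; the topic names, tag ID, and gains are illustrative and not our exact code.

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist
from ar_track_alvar_msgs.msg import AlvarMarkers

TARGET_ID = 3   # ID of the tag to drive toward (placeholder)
cmd_pub = None

def on_markers(msg):
    """Steer toward the target tag with a simple proportional controller."""
    for marker in msg.markers:
        if marker.id != TARGET_ID:
            continue
        pose = marker.pose.pose.position      # tag position in the camera frame
        cmd = Twist()
        cmd.angular.z = -0.8 * pose.x         # turn to center the tag in view
        cmd.linear.x = 0.3 if pose.z > 0.4 else 0.0   # stop about 40 cm away
        cmd_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("ar_tag_follower")
    cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rospy.Subscriber("/ar_pose_marker", AlvarMarkers, on_markers)
    rospy.spin()
```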
Final Results
We were able to implement a system where the robot could move from its starting position and follow a series of AR tags. After seeing a certain number of AR tags, the robot could move back to its starting position. From the starting position, we could also send the robot to a particular AR tag's location. The final hardware had the actuator installed with proper wiring, but without the claw to pick up objects and without the Google Coral TPU.
Future Improvements
We would like to achieve the initial goal without any hacks. This would include installing the Google Coral TPU for object detection and finishing the claw of the actuator. The robot moved in a very jerky manner, so we would like to implement a better control system. We could also look into the weight distribution of the robot to lower its center of gravity and improve stability while moving. Eventually, we would love to move the whole system onto an RC car base so that the robot could easily go outside and move faster on rougher surfaces.
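For the jerky motion specifically, even a simple acceleration limit on the commanded speeds would help before a full controller redesign. The sketch below shows the idea; the limits and rates are made up, not tuned for our robot.

```python
class VelocitySmoother:
    """Limit how much the commanded speed may change per control tick,
    so the robot ramps up gradually instead of jumping to full speed."""

    def __init__(self, max_accel=0.5, rate_hz=20.0):
        self.max_step = max_accel / rate_hz   # largest allowed change per tick
        self.current = 0.0

    def update(self, target):
        error = target - self.current
        # Clamp the change to the per-tick limit in either direction.
        self.current += max(-self.max_step, min(self.max_step, error))
        return self.current

smoother = VelocitySmoother(max_accel=0.5, rate_hz=20.0)
for _ in range(5):
    print(round(smoother.update(0.3), 3))   # ramps 0.025, 0.05, ... toward 0.3
```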