The final project for my Embedded Systems class was a mobile defense robot built from a LEGO Mindstorms kit and an electronic Nerf turret. A Raspberry Pi with a BrickPi shield interfaced with the LEGO components and controlled the robot, which carried four ultrasonic sensors, two light sensors, two drive motors, an IMU, and a Raspberry Pi camera.
The goal of the project was to create a mobile defense robot that would avoid obstacles and stay within a white border. Once the robot drove over a magnet it would stop driving and enter targeting mode, in which the Raspberry Pi camera and the turret motors handled moving, targeting, and firing the Nerf turret. After firing all four Nerf missiles, the robot would begin driving again, using the camera to steer itself into the reload station. Once reloaded, it would repeat the whole process.
To accomplish these goals, the program controlling the robot had to use multiprocessing. Multiprocessing lets the robot execute multiple functions at the same time by dividing the work between the separate cores of the Raspberry Pi's quad-core CPU. Separate processes do not share memory by default, which essentially eliminates the use of global variables. The program was therefore structured around shared ctypes objects, each with a corresponding lock, that were passed to every process as parameters. Each process read the current values when making decisions; if a process needed to change a value, it acquired the lock, updated the object, and released the lock afterwards.
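A minimal sketch of that shared-state pattern, assuming Python's multiprocessing module: the process bodies, the random "sensor" readings, and the bounded loops are stand-ins for the real code, but the Value/Lock handoff is the mechanism described above.

```python
from multiprocessing import Process, Lock, Value
import ctypes
import random
import time

def sensor_loop(front_dist, lock):
    """Writer process: updates the shared distance under the lock."""
    for _ in range(20):                      # bounded for the sketch
        reading = random.uniform(5, 120)     # stand-in for a real ultrasonic read
        with lock:
            front_dist.value = reading
        time.sleep(0.05)

def drive_loop(front_dist, lock):
    """Reader process: copies the shared value, then decides what to do."""
    for _ in range(20):
        with lock:
            dist = front_dist.value          # local copy for decision making
        mode = 1 if dist > 50 else (2 if dist > 10 else 3)
        print(f"front={dist:5.1f} cm -> mode {mode}")
        time.sleep(0.05)

if __name__ == "__main__":
    lock = Lock()
    front_dist = Value(ctypes.c_double, 100.0)   # shared ctypes object
    procs = [Process(target=sensor_loop, args=(front_dist, lock)),
             Process(target=drive_loop, args=(front_dist, lock))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```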
The first goal was to find a magnet. This was accomplished by creating separate processes for driving, reading the IMU, updating the LEGO components, and reading the ultrasonic sensors. The driving process switched between three driving modes based on the ultrasonic and light sensors. The first mode was entered when there was more than 50 cm of room in front of the robot: a PD controller acted on the cross-track error between the measured distance to the wall on the robot's right side and the target distance of 30 cm.
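The wall follower can be sketched as below; only the 30 cm target and the cross-track error come from the project, while the gains, time step, and power limits are illustrative assumptions.

```python
# Sketch of the mode-1 wall follower: a PD controller on the cross-track
# error to the right-hand wall. KP/KD and the power limits are illustrative.
TARGET_DIST = 30.0          # cm, desired distance from the right wall

def pd_correction(right_dist, prev_error, dt, kp=2.0, kd=8.0):
    """Return (steering correction, current error) from the cross-track error."""
    error = right_dist - TARGET_DIST
    derivative = (error - prev_error) / dt
    return kp * error + kd * derivative, error

def wheel_powers(base, correction, limit=100):
    """Turn one correction into differential wheel powers (clamped).

    A positive error (too far from the right wall) speeds up the left wheel,
    steering the robot back toward the wall.
    """
    left = max(-limit, min(limit, base + correction))
    right = max(-limit, min(limit, base - correction))
    return left, right
```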
The second driving mode was entered when there was between 10 cm and 50 cm of room in front of the robot. It used the large-radius turning kinematics of a vehicle with two independently driven wheels; the turning radius and direction were determined by the distance to the closest obstacle and which side of the robot it was on.
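One way to write those kinematics is sketched below. The track width is an assumed constant, and the project tied the radius itself to the obstacle distance; only the arc ratio for two independently driven wheels is standard differential-drive geometry.

```python
# Sketch of the mode-2 large-radius turn for two independently driven wheels.
# TRACK_WIDTH is an assumed wheel separation, not a measured value.
TRACK_WIDTH = 15.0          # cm between the drive wheels (assumed)

def arc_speeds(base_speed, radius_cm, turn_left):
    """Slow the inner wheel so both wheels trace arcs around the same center.

    For a turn of radius R measured at the robot's center, the inner wheel
    travels an arc of radius R - W/2 and the outer wheel R + W/2, so the
    inner speed is the outer speed scaled by (R - W/2) / (R + W/2).
    """
    outer = base_speed
    inner = base_speed * (radius_cm - TRACK_WIDTH / 2) / (radius_cm + TRACK_WIDTH / 2)
    return (inner, outer) if turn_left else (outer, inner)
```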
The third driving mode was entered when an obstacle was closer than 10 cm. The robot either turned in place, running the two motors at the same speed in opposite directions, or backed up with both motors in reverse. The driving process also used the light sensors to override the ultrasonic values whenever the robot drove over a white boundary line, so that encountering a white line triggered the same reaction as a very close obstacle under the rules of the third driving mode.
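Putting the three modes and the white-line override together gives roughly the decision logic below; the 10 cm and 50 cm cutoffs come from the project, while the light-sensor threshold is an assumption.

```python
# Sketch of the mode selection plus the white-line override: a boundary line
# is treated as a zero-distance obstacle so mode 3 takes over.
WHITE_THRESHOLD = 50        # assumed reflectance cutoff for the white border

def effective_distance(front_cm, left_light, right_light):
    """Override the ultrasonic reading when a light sensor sees the border."""
    if left_light > WHITE_THRESHOLD or right_light > WHITE_THRESHOLD:
        return 0.0
    return front_cm

def drive_mode(front_cm):
    if front_cm > 50:
        return 1            # wall-following PD control
    if front_cm > 10:
        return 2            # large-radius turn away from the nearest obstacle
    return 3                # turn in place or back up
```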
Once a magnet was encountered, the robot would stop moving, the driving and IMU processes would be joined and terminated, and the robot would enter targeting mode.
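The handoff between states can be as simple as the sketch below, assuming the Process handles from the driving state are still in scope in the parent; here the workers are stopped and then joined before the next state's processes are spawned.

```python
# Sketch of the state transition into targeting mode; the names are illustrative.
def enter_targeting_mode(drive_proc, imu_proc):
    for proc in (drive_proc, imu_proc):
        proc.terminate()    # stop the driving-state workers
        proc.join()         # wait for them to exit before starting new processes
    # ...spawn the targeting-mode processes here...
```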
Targeting mode used separate processes for video processing, enemy targeting, and turret motor control for theta, phi, and firing. To detect enemy targets, I used OpenCV to build a training set from photos of the enemy target and photos gathered from the internet, and trained a Haar-cascade classifier on the positive and negative images. The result of the training was an XML file, which was used on the live video to detect enemy objects. When an enemy object was found, the targeting object was updated with the enemy's size and location, which the turret processes used to center the enemy in the video. The robot fired a Nerf missile whenever the enemy was centered. After firing four Nerf missiles, the robot centered the turret and camera, terminated the turret processes, and entered reloading mode.
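The detection step itself is only a few lines of OpenCV; the cascade filename, camera index, and detectMultiScale parameters below are assumptions rather than the project's actual values.

```python
# Minimal sketch of running the trained Haar cascade against live video and
# pulling out the largest detection for the turret processes to aim at.
import cv2

cascade = cv2.CascadeClassifier("enemy_cascade.xml")   # trained XML (assumed name)
cap = cv2.VideoCapture(0)                              # Raspberry Pi camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(hits):
        x, y, w, h = max(hits, key=lambda r: r[2] * r[3])  # largest box
        # The turret processes would read this center and size from the shared
        # targeting object and drive theta/phi until it sits mid-frame, then fire.
        print("enemy center:", (x + w // 2, y + h // 2), "size:", w * h)
```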
Reloading mode used separate processes for video processing, driving, updating the LEGO components, and reading the ultrasonic sensors. This state followed the driving rules of the first state, but with the goal of reaching the reload station. To detect the reload station, another cascade XML file was trained with the same process described above, this time using photos of the reload station. When the reload station was detected, the robot altered the ultrasonic values to turn itself toward the station, which centered it in the video. The robot continued to drive toward the reload station until the station grew too large to fit in the video frame. After the turret was reloaded with Nerf missiles and the front ultrasonic sensor was covered, the robot returned to the first state.
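One plausible shape for that steering trick is sketched below: the detection's horizontal offset is folded back into the side distances the driving rules already consume. The frame width, bias amount, and size cutoff are all assumptions, and the actual biasing in the project may have differed.

```python
# Sketch of reload-mode steering: bias the "ultrasonic" side distances so the
# existing driving rules turn the robot toward the detected reload station.
FRAME_WIDTH = 320                  # assumed video width in pixels
SIZE_CUTOFF = 0.8 * FRAME_WIDTH    # assumed "too big to fit" threshold

def steer_toward_station(box, left_cm, right_cm):
    """Return adjusted (left_cm, right_cm), or None once the station fills the frame."""
    x, _, w, _ = box
    if w > SIZE_CUTOFF:
        return None                         # close enough: stop and wait for reload
    offset = (x + w / 2) - FRAME_WIDTH / 2  # negative -> station is to the left
    if offset < 0:
        return left_cm + 20.0, right_cm     # pretend the left side is more open
    return left_cm, right_cm + 20.0         # pretend the right side is more open
```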
The project lasted about a month and a half, and I was able to get the robot to accomplish all of the goals: autonomously driving while avoiding obstacles and staying within the boundaries, accurately targeting and firing at the enemy objects, and correctly identifying and driving to the reload station. The biggest thing I learned from this project was the difference between multiprocessing and multithreading. My first attempt used multithreading, which runs all of the threads on a single core under round-robin scheduling but does allow the use of global variables. This worked for the first state, but it was not fast enough to process the video and control the robot at the same time. Realizing this failure required a complete restructuring and rewriting of the program. After switching to multiprocessing, the main challenges were focused on testing the robot in the real world and correcting the code accordingly.