This project aims to construct, as a proof of concept, an autonomous self-driving robot controlled exclusively by an OpenMV Cam and its integrated microcontroller. Many similar robots have been developed in recent years, mostly built around microcontrollers such as Arduinos and Raspberry Pis. Although it was not possible to fully assemble a final product and join all of the components together, we were able to show that the OpenMV board is capable of accurately and safely controlling an autonomous robot, navigating obstacles, and following circuits.
To connect all of the components of the project, we needed to create a 3D model for the body of the robot. This body must support our microcontroller and camera (the OpenMV), as well as a powerbank, which will be our main power source. It must also hold the servos, wheels, and back bearing, which control the robot's movement and keep it stable. A rendering of the robot's final body is displayed below.
In the upper part of the robot, there is a case modeled to the dimensions of our version of the OpenMV (Cam H7), with slots for easy access to all of the I/O ports as well as to the servo shield port, which is needed to connect the servos. This case attaches to the vertical pillar through a joint at its back. All of these details can be seen in the following images. The pillar is itself an independent piece that fits into the main body through another joint, as shown below.
Some text is also printed on the robot. This was done purely for aesthetic reasons and could easily be removed if it severely impacted the complexity or printing time of the body.
The body's main section stores the powerbank (a Mi Powerbank 2S), for which we modelled the necessary slits to easily access its I/O components. The two pieces of the main section are fastened together with four screws, one in each corner; their holes are visible in the images shown so far. This solution is not ideal, as it makes assembly more complicated, but it is the one that saves the most material and printing time while maintaining structural integrity. Since all of the ports remain accessible, there is no need to remove the powerbank frequently.
Lastly, the bottom section of the robot holds the components responsible for its movement and stability. At the back, a pillar houses a spherical bearing whose only purpose is to guarantee stability and ease the robot's movement. At the front, there are two supports, one on each side, for the servos. These are screwed to the support pillars to guarantee stability, and to the wheels through the holes visible in the following image. In the image below, two grey boxes are also visible; these are merely placeholders, designed to visualize the location of the servos, and would not be printed.
At this point, the robot is completely modeled and ready to be printed. The design was created with two main goals in mind: minimizing the quantity of material used and minimizing the model's complexity. 3D printing is still a slow and difficult process, so we developed our model to simplify the printing as much as possible. All of the components fit together using joints instead of being printed as a single piece; each component can be printed independently, ensuring that if anything goes wrong during printing, the material wasted is minimal.
The robot has three possible states:
stop: the camera enters a hibernation mode (red LEDs on)
pause: the speed of both wheels is set to zero, so the robot does not move (blue LEDs on)
running: the speed of each wheel changes according to the orientation of the line (green LEDs on)
Transitions between states are triggered by two buttons, stopButton and pauseContinueButton, and are represented in the following diagram. Each transition occurs only on a rising edge (RE) of a button, that is, when a button that was off is now on. A minimal sketch of this state machine is shown below.
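The following MicroPython sketch illustrates the state machine described above. The pin assignments, pull-down configuration, and polling loop are assumptions for illustration, not the project's actual wiring.

```python
import time
from pyb import Pin

# Hypothetical pin assignments; the actual wiring is not specified here.
stopButton = Pin('P0', Pin.IN, Pin.PULL_DOWN)
pauseContinueButton = Pin('P1', Pin.IN, Pin.PULL_DOWN)

state = 'stop'
prev_stop, prev_pause = 0, 0

def rising_edge(now, prev):
    # A transition fires only when a button goes from off (0) to on (1).
    return now == 1 and prev == 0

while True:
    s, p = stopButton.value(), pauseContinueButton.value()
    if rising_edge(s, prev_stop):
        state = 'stop'         # red LEDs on, camera hibernates
    elif rising_edge(p, prev_pause):
        # pauseContinueButton toggles between pause and running.
        if state == 'running':
            state = 'pause'    # blue LEDs on, both wheel speeds set to zero
        else:
            state = 'running'  # green LEDs on, follow the line
    prev_stop, prev_pause = s, p
    time.sleep_ms(10)
```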
With two wheels, the robot's orientation is controlled by varying the speed of each wheel. According to the differential drive concept, to turn right, the left wheel speed must be higher than the right wheel speed, and the difference between the two is proportional to how fast we want to turn. Since one of the robot's purposes is to follow a line, it will try to match the orientation of the line it sees. To achieve this, we need a relation between the line features and the robot's angular speed, and a way to convert that angular speed into the different speed values applied to each wheel. The first task is handled by a controller whose gains are tuned to produce a reasonable angular speed from the line features. For the second task, the angular speed is converted using the following relation.
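Assuming the standard differential drive kinematics, with $v$ the desired linear speed, $\omega$ the angular speed output by the controller, and $L$ the distance between the wheels, the wheel speeds are:

$$v_{\text{left}} = v - \frac{\omega L}{2}, \qquad v_{\text{right}} = v + \frac{\omega L}{2}$$

With the convention that counter-clockwise rotation is positive, a negative $\omega$ (turning right) makes the left wheel faster than the right wheel, as described above.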
To follow the path defined by the lines, the robot first needs to recognize those lines. For this purpose, the function get_theta_rho was developed: from a snapshot taken by the camera, it recognizes all lines and returns the most common angle (theta) and rho. This function relies on a function provided by OpenMV, find_line_segments([roi[, merge_distance=0[, max_theta_difference=15]]]), where roi is the region of interest to analyse, merge_distance is the maximum number of pixels by which two line segments can be separated (at any point on either line) and still be merged, and max_theta_difference is the maximum difference in degrees between the angles of two line segments that are merge_distance apart for them to be merged. In this project, a roi of (120, 210, 120, 30), a merge_distance of 30, and a max_theta_difference of 5 were used. A sketch of this function is given below.
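A minimal sketch of how get_theta_rho might look with these parameters; only the function's inputs and outputs are described above, so the tallying logic is an assumption.

```python
ROI = (120, 210, 120, 30)  # region of interest used in this project

def get_theta_rho(img):
    # Find line segments inside the region of interest; the positional
    # arguments are roi, merge_distance and the maximum theta difference.
    segments = img.find_line_segments(ROI, 30, 5)
    if not segments:
        return None
    # Tally the (theta, rho) pairs and return the most common one.
    counts = {}
    for seg in segments:
        key = (seg.theta(), seg.rho())
        counts[key] = counts.get(key, 0) + 1
    return max(counts, key=counts.get)
```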
The execution time of this function is, on average, 12679 µs, a result obtained from three independent executions of the program; a sketch of how such a timing can be measured is shown below.
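How the timings were measured is not stated; a plausible sketch using MicroPython's microsecond ticks, assuming the get_theta_rho sketch above, would be:

```python
import time
import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)  # 320x240, large enough for the roi used
sensor.skip_frames(time=2000)      # let the camera settle

img = sensor.snapshot()
start = time.ticks_us()
get_theta_rho(img)                 # function sketched above
elapsed = time.ticks_diff(time.ticks_us(), start)
print("get_theta_rho took %d us" % elapsed)
```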
Besides recognizing lines and following the path they define, the robot can also recognize coins. For this, a function named recognize_coin receives as an argument a snapshot obtained by the OpenMV camera. The function processes the snapshot with an OpenMV built-in function, find_circles([roi[, x_stride=2[, y_stride=1[, threshold=2000[, x_margin=10[, y_margin=10[, r_margin=10[, r_min=2[, r_max[, r_step=2]]]]]]]]]]), which uses the Hough transform to recognize circles in a given snapshot. The following parameters were used: threshold = 2000, x_margin = 10, y_margin = 10, r_margin = 10, r_min = 2, r_max = 100 and r_step = 2.
The function recognize_coin prints the number of recognized coins to the terminal. Its execution time is approximately 40201 µs, a result obtained by averaging the execution times of ten independent runs. A sketch of the function is shown below.
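A minimal sketch of recognize_coin using the parameters reported above; apart from those parameters and the printed count, the body is an assumption.

```python
def recognize_coin(img):
    # Detect circles with the Hough transform parameters reported above.
    circles = img.find_circles(threshold=2000,
                               x_margin=10, y_margin=10, r_margin=10,
                               r_min=2, r_max=100, r_step=2)
    # Report the number of recognized coins in the terminal.
    print("Recognized coins:", len(circles))
    return circles
```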
We were truly satisfied with the results attained. The microcontroller in the OpenMV Cam is impressive and highly optimized for machine vision tasks. We have shown that this board can power an independent, fully working autonomous robot: it was possible to control the movement with precision, identify entities in the robot's path, react accordingly, and adjust the robot's position quickly and with ease.
Although it was not possible to combine all of the components developed and fully test our work to guarantee that everything would perform as expected, the work developed has proven sufficient as a proof of concept. We have shown that it is possible to build and operate an autonomous driving robot using the OpenMV technology.