Automation is visible in nearly all spheres of life. Everything is becoming “smart”, from mobile phones to refrigerators and even doorbells. These devices can interact with the environment around them and sometimes make decisions for themselves. Vehicles are no different, and autonomous self-driving automobiles are no longer a subject of science fiction. They have already been implemented in the form of smaller autonomous “bots”, with Starship’s Robot being a pioneering example.
A major objective of such automobiles or bots is to sense the surrounding environment in order to detect obstacles and avoid collisions with them. The aim of this project was to emulate this on a smaller scale by creating a land-based mini bot that can detect and manoeuvre around obstacles along its designated path. The bot is given a destination to move towards in the form of coordinates and is equipped with the sensors needed to perceive its surroundings, processing their readings with computer vision techniques. It dynamically decides where to move based on the input from its sensors and the outputs of the computer vision algorithms, without any human interference, making it truly autonomous.
The work done chiefly consists of three parts:
1. Establishing basic navigation methodology
The Arduino Uno microcontroller is the control centre for the movement of the bot. It connects to the motor drivers, the ultrasonic sensor, the GPS module, and the digital compass/magnetometer, and all these components work together to navigate the bot.
Motor Control: Two L298N H-bridge motor drivers are used to control the motors. The direction of each motor is set by enabling power to different input pins of the motor driver, for example as in the sketch below.
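As an illustration, a minimal Arduino sketch for driving one motor through a single L298N channel might look like the following; the pin assignments and speed values are assumptions for illustration, not the project's actual wiring.

```cpp
// Minimal sketch: driving one motor through one L298N channel.
// Pin assignments are illustrative, not the project's actual wiring.
const int ENA = 9;   // PWM speed control for channel A
const int IN1 = 7;   // direction input 1
const int IN2 = 8;   // direction input 2

void setup() {
  pinMode(ENA, OUTPUT);
  pinMode(IN1, OUTPUT);
  pinMode(IN2, OUTPUT);
}

void forward(int speed) {
  digitalWrite(IN1, HIGH);  // IN1 high, IN2 low -> forward
  digitalWrite(IN2, LOW);
  analogWrite(ENA, speed);  // 0-255 PWM duty cycle
}

void reverse(int speed) {
  digitalWrite(IN1, LOW);   // reversed polarity -> backward
  digitalWrite(IN2, HIGH);
  analogWrite(ENA, speed);
}

void stopMotor() {
  analogWrite(ENA, 0);
}

void loop() {
  forward(200);
  delay(2000);
  stopMotor();
  delay(1000);
}
```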
A sense of direction: The QMC5883L module is a digital compass that returns the azimuth, or heading angle, of the module with respect to magnetic North. With this reading we can ascertain which direction the bot is facing at all times and steer it accordingly.
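A minimal heading-reading sketch, assuming the commonly used QMC5883LCompass Arduino library:

```cpp
// Reading the heading angle from the QMC5883L over I2C.
// Assumes the QMC5883LCompass Arduino library is installed.
#include <QMC5883LCompass.h>

QMC5883LCompass compass;

void setup() {
  Serial.begin(9600);
  compass.init();  // starts I2C and configures the sensor
}

void loop() {
  compass.read();                      // fetch a fresh measurement
  int heading = compass.getAzimuth();  // degrees from magnetic North
  Serial.println(heading);
  delay(250);
}
```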
Location and Destination: The NEO-6M GPS module is used to determine the live location of the bot in terms of coordinates. The destination that the bot needs to travel to is also specified in terms of coordinates. With the help of the TinyGPS++ Arduino library, the course between these two coordinates can be calculated, and this forms the bearing angle for the bot. The library can likewise be used to calculate the distance between the two coordinates.
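A sketch of the bearing and distance calculation using TinyGPS++; the serial pins and the destination coordinates below are placeholders:

```cpp
// Computing the bearing angle and distance to a destination with TinyGPS++.
// Serial pins and destination coordinates are placeholders.
#include <TinyGPS++.h>
#include <SoftwareSerial.h>

TinyGPSPlus gps;
SoftwareSerial gpsSerial(4, 3);  // RX, TX (illustrative pins)

const double DEST_LAT = 12.9716;  // placeholder destination
const double DEST_LNG = 77.5946;

void setup() {
  Serial.begin(9600);
  gpsSerial.begin(9600);  // NEO-6M default baud rate
}

void loop() {
  while (gpsSerial.available()) {
    gps.encode(gpsSerial.read());  // feed NMEA bytes to the parser
  }
  if (gps.location.isUpdated()) {
    double lat = gps.location.lat();
    double lng = gps.location.lng();
    // Bearing angle: course from the current position to the destination
    double bearing = TinyGPSPlus::courseTo(lat, lng, DEST_LAT, DEST_LNG);
    // Great-circle distance to the destination, in metres
    double dist = TinyGPSPlus::distanceBetween(lat, lng, DEST_LAT, DEST_LNG);
    Serial.print("Bearing: ");
    Serial.println(bearing);
    Serial.print("Distance (m): ");
    Serial.println(dist);
  }
}
```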
Objects vs Obstacles: An HC-SR04 ultrasonic sensor module is used to calculate the distance to any obstruction that lies in front of the bot. A threshold is specified such that if an object is closer than the threshold limit, it is treated as an obstacle and needs to be processed.
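A minimal distance-measurement sketch for the HC-SR04; the pin numbers and the 50 cm threshold are illustrative values, not the project's actual configuration:

```cpp
// Measuring distance with the HC-SR04 and flagging obstacles.
// Pin numbers and the 50 cm threshold are illustrative.
const int TRIG = 5;
const int ECHO = 6;
const float OBSTACLE_CM = 50.0;  // objects closer than this are obstacles

void setup() {
  Serial.begin(9600);
  pinMode(TRIG, OUTPUT);
  pinMode(ECHO, INPUT);
}

float readDistanceCm() {
  digitalWrite(TRIG, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG, HIGH);  // 10 us pulse triggers a ping
  delayMicroseconds(10);
  digitalWrite(TRIG, LOW);
  long duration = pulseIn(ECHO, HIGH);  // echo time in microseconds
  return duration * 0.0343 / 2.0;       // speed of sound, out and back
}

void loop() {
  float d = readDistanceCm();
  if (d < OBSTACLE_CM) {
    Serial.println("Obstacle detected");  // hand off to the vision pipeline
  }
  delay(100);
}
```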
The bot is said to be moving towards its destination when the heading angle (where it is currently facing) and the bearing angle (where it should be facing) are equal. The difference between them is computed and minimised by driving the motors to rotate the bot left or right until the difference is almost zero, as in the sketch below.
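In code, this steering decision reduces to wrapping the signed error between bearing and heading into [-180, 180] and turning towards the smaller side. In the sketch below, turnLeft, turnRight, and driveForward are hypothetical stand-ins for the motor-driver routines, and the tolerance is an illustrative value:

```cpp
// Steering logic: wrap the bearing-heading error into [-180, 180] and
// rotate until it is near zero.
const float TOLERANCE_DEG = 5.0;  // "almost zero" threshold (illustrative)

void turnLeft()     { /* motor-driver calls omitted */ }
void turnRight()    { /* motor-driver calls omitted */ }
void driveForward() { /* motor-driver calls omitted */ }

float headingError(float bearing, float heading) {
  float error = bearing - heading;
  if (error > 180.0)  error -= 360.0;  // always take the shorter rotation
  if (error < -180.0) error += 360.0;
  return error;
}

void setup() {}

void loop() {
  float bearing = 120.0;  // placeholder: from the GPS course calculation
  float heading = 90.0;   // placeholder: from the compass azimuth
  float error = headingError(bearing, heading);
  if (error > TOLERANCE_DEG) {
    turnRight();           // destination lies clockwise of current heading
  } else if (error < -TOLERANCE_DEG) {
    turnLeft();
  } else {
    driveForward();        // facing the destination: proceed
  }
}
```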
2. Object Detection Using Computer Vision
A region proposal network is used to detect objects with an RPi camera fitted on the bot. Due to the circumstances at the time, this setup was recreated by taking images with a mobile phone camera.
The region proposal network outputs anchor boxes that are likely to contain an object. These proposals are passed through a convolutional neural network to extract features, which are then fed to a softmax classifier and a bounding-box regressor to generate the class probabilities and coordinates of the boxes. Since the model can detect the same obstacle repeatedly, non-max suppression (NMS) is performed on the generated outputs, governed by an IoU (intersection over union) threshold. The Faster R-CNN model is pre-trained on the ImageNet dataset; using transfer learning, it can then be fine-tuned on the required dataset.
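For clarity, a greedy NMS pass over scored boxes might look like the following standalone C++ sketch; the box representation and sample values are generic assumptions, not the exact code of the detection framework used:

```cpp
// Greedy non-max suppression: keep the highest-scoring box, drop any
// remaining box whose IoU with it exceeds the threshold, and repeat.
#include <algorithm>
#include <vector>

struct Box { float x1, y1, x2, y2, score; };

float iou(const Box& a, const Box& b) {
  float ix1 = std::max(a.x1, b.x1), iy1 = std::max(a.y1, b.y1);
  float ix2 = std::min(a.x2, b.x2), iy2 = std::min(a.y2, b.y2);
  float inter = std::max(0.0f, ix2 - ix1) * std::max(0.0f, iy2 - iy1);
  float areaA = (a.x2 - a.x1) * (a.y2 - a.y1);
  float areaB = (b.x2 - b.x1) * (b.y2 - b.y1);
  return inter / (areaA + areaB - inter);  // intersection over union
}

std::vector<Box> nms(std::vector<Box> boxes, float iouThreshold) {
  std::sort(boxes.begin(), boxes.end(),
            [](const Box& a, const Box& b) { return a.score > b.score; });
  std::vector<Box> kept;
  for (const Box& candidate : boxes) {
    bool suppressed = false;
    for (const Box& k : kept) {
      if (iou(candidate, k) > iouThreshold) { suppressed = true; break; }
    }
    if (!suppressed) kept.push_back(candidate);
  }
  return kept;
}

int main() {
  std::vector<Box> boxes = {
    {10, 10, 50, 50, 0.9f},
    {12, 12, 52, 52, 0.8f},   // heavily overlaps the first box
    {100, 100, 140, 140, 0.7f},
  };
  std::vector<Box> kept = nms(boxes, 0.5f);
  // kept holds two boxes: the duplicate detection was suppressed
  return 0;
}
```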
In this project the main aim is to detect the obstacle directly in front of the robot, so a search over the detections is used to find the object nearest to the robot in the image. Because the image y-axis grows downward, this is the detection whose bounding box has the largest y2 coordinate (the lowest bottom edge), as sketched below.
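A minimal sketch of that selection step, assuming a generic bounding-box type and at least one detection:

```cpp
// Selecting the detection nearest the robot: in image coordinates the
// y-axis grows downward, so the box with the largest y2 (bottom edge)
// sits lowest in the frame and hence closest to the camera.
#include <vector>

struct Box { float x1, y1, x2, y2; };  // generic bounding-box type

Box nearestObstacle(const std::vector<Box>& detections) {
  Box nearest = detections.front();    // assumes a non-empty list
  for (const Box& b : detections) {
    if (b.y2 > nearest.y2) nearest = b;  // lower bottom edge -> closer object
  }
  return nearest;
}

int main() {
  std::vector<Box> detections = {{30, 40, 90, 120}, {200, 300, 260, 430}};
  Box target = nearestObstacle(detections);  // picks the box with y2 = 430
  (void)target;
  return 0;
}
```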
Once the final object is isolated, the boundaries of the object are used to determine the number of pixels between the centre of the image and each of the boundaries (left and right). The smaller of the two pixel counts is then sent to the Arduino to be processed, via serial USB communication between the RPi and the Arduino. The Arduino then uses this pixel count to calculate the actual size of the obstacle.
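On the Arduino side, receiving that pixel count could look like the sketch below; the wire format (newline-terminated integers) and baud rate are assumptions for illustration:

```cpp
// Receiving the edge pixel count from the RPi over the USB serial link.
// The wire format (newline-terminated integers) is an illustrative assumption.
long edgePixels = 0;

void setup() {
  Serial.begin(9600);  // must match the baud rate used on the RPi side
}

void loop() {
  if (Serial.available() > 0) {
    edgePixels = Serial.parseInt();  // read the next integer from the stream
    // edgePixels now feeds the obstacle-width calculation described below
  }
}
```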
3. Algorithm to manoeuvre
Once the number of pixels to the closer edge of the object is obtained, it is possible to manoeuvre around the obstacle by calculating its actual size. The algorithm is outlined as follows:
Obstacle Width = (Distance * Edge Pixels * Sensor Width) / (Focal Length * Total Image Pixels)
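This is the standard pinhole-camera similar-triangles relation. A worked sketch of the calculation is given below; all parameter values are illustrative, not the project's actual calibration:

```cpp
// Pinhole-camera estimate of obstacle width from the formula above.
// All parameter values are illustrative, not calibrated for the real camera.
#include <cstdio>

float obstacleWidth(float distanceCm, float edgePixels,
                    float sensorWidthMm, float focalLengthMm,
                    float imageWidthPixels) {
  // Width subtended on the sensor, scaled back out to the obstacle's distance
  return (distanceCm * edgePixels * sensorWidthMm) /
         (focalLengthMm * imageWidthPixels);
}

int main() {
  // e.g. an obstacle 100 cm away spanning 300 px of a 1280 px wide image,
  // on a 3.68 mm wide sensor behind a 3.04 mm lens
  float w = obstacleWidth(100.0f, 300.0f, 3.68f, 3.04f, 1280.0f);
  std::printf("Estimated obstacle width: %.1f cm\n", w);  // about 28.4 cm
  return 0;
}
```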
While the actual bot could not be tested to confirm that it manoeuvres appropriately around the obstacle, the algorithm was emulated on images to check whether the obstacle size and angles were calculated correctly. In both test cases, the algorithm was able to detect the obstacle and determine which side to move to, with the blue arrow indicating the direction of movement (the new bearing angle). The angle is given an extra 15-degree leeway as a safety margin.