Lead Instructors:
Pratap Tokekar - Advisor
Harnaik Singh Dhami - Graduate Student
Vishnu Dutt Sharma - Graduate Student
Peihong Yu - Graduate Student
AI4ALL Mentors:
Andrew Michael Lambeth
Janielle M Jackson
Student Collaborators:
Adil Kasim
Shivani Nanda
Rahi Dasgupta
Megan Bell
Sahana Sreeram
Emmanuel Key
How can we use AI to enable a drone to autonomously navigate its surroundings? Starting from an inefficient algorithm for moving the quad-copter, we were tasked with making its exploration of unknown surroundings more efficient. While one team worked on environment exploration, the other worked on improving the object detection algorithm: starting from a baseline model, the object detection team trained an improved version.
Exploring the frontier by applying the object detection program and exploration controls.
Using LiDAR and Computer Vision, a simulated drone was programmed to move around its environment autonomously while avoiding obstacles.
The drone was sent to explore frontier voxels, which mark the boundary between mapped space and unknown areas.
In order to accomplish this task efficiently, the team:
Kept track of visited voxels
Clustered the voxels and selected unique ones to explore (see the sketch after this list)
Rotated the drone's orientation to minimize blind spots
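A minimal Python sketch of the frontier-selection idea appears below, assuming a 3D occupancy grid whose voxels are marked free, occupied, or unknown. The function names, the DBSCAN clustering, and its parameters are illustrative placeholders, not the team's actual code.

```python
# Sketch: find frontier voxels on a 3D occupancy grid, cluster them, and pick a goal.
# Assumes voxels are labeled 0 = free, 1 = occupied, -1 = unknown (placeholder convention).
import numpy as np
from sklearn.cluster import DBSCAN

FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def find_frontier_voxels(grid):
    """Return (N, 3) indices of free voxels that border at least one unknown voxel."""
    frontiers = []
    for x, y, z in np.argwhere(grid == FREE):
        # Check the 6-connected neighborhood for unknown space.
        for dx, dy, dz in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]:
            nx, ny, nz = x + dx, y + dy, z + dz
            if (0 <= nx < grid.shape[0] and 0 <= ny < grid.shape[1]
                    and 0 <= nz < grid.shape[2] and grid[nx, ny, nz] == UNKNOWN):
                frontiers.append((x, y, z))
                break
    return np.array(frontiers)

def select_goal(frontiers, visited, drone_pos):
    """Cluster frontier voxels and pick the nearest cluster center not yet visited."""
    labels = DBSCAN(eps=2.0, min_samples=3).fit_predict(frontiers)
    candidates = []
    for label in set(labels) - {-1}:                 # -1 marks DBSCAN noise points
        center = frontiers[labels == label].mean(axis=0)
        if tuple(np.round(center).astype(int)) not in visited:
            candidates.append(center)
    if not candidates:
        return None                                  # exploration finished
    dists = [np.linalg.norm(c - drone_pos) for c in candidates]
    return candidates[int(np.argmin(dists))]
```

Keeping a `visited` set and only sending the drone to cluster centers keeps it from revisiting the same region, which is the efficiency gain the team was after.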
Using Machine Learning to identify and label objects in static images, then applying the model to detect objects in videos.
By creating a model and training it on thousands of images, the team enabled it to detect and label different objects found in the environment:
SSD (Single Shot MultiBox Detector) was the pre-trained model used for object detection (a usage sketch follows this list).
It was fed images captured during practice runs in the drone's simulated environment.
The model was trained on these images and learned to identify and label the objects it encountered.
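The sketch below shows one way to run a pre-trained SSD model from torchvision on a single frame, as a rough stand-in for the detection step; the image path, weights string, and score threshold are placeholders, and the team's own training pipeline is not reproduced here.

```python
# Sketch: run a COCO-pretrained SSD detector on one simulator frame.
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

# Load SSD300 with pre-trained weights (the team started from a pre-trained SSD).
model = torchvision.models.detection.ssd300_vgg16(weights="DEFAULT")
model.eval()

image = read_image("simulation_frame.png")           # placeholder frame from the simulator
batch = [convert_image_dtype(image, torch.float)]    # SSD expects float tensors in [0, 1]

with torch.no_grad():
    predictions = model(batch)[0]                     # dict with 'boxes', 'labels', 'scores'

# Keep only confident detections (0.5 is an arbitrary example threshold).
keep = predictions["scores"] > 0.5
for box, label in zip(predictions["boxes"][keep], predictions["labels"][keep]):
    print(f"class {label.item()} at {box.tolist()}")
```

The same call can be made frame by frame on video from the drone's camera, which is how a still-image detector carries over to detection in videos.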
The final result was achieved by combining the object detection model with the exploration code. The drone successfully navigated its surroundings, avoiding obstacles while mapping its environment.
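As a purely hypothetical illustration of how the two pieces might be glued together, the loop below alternates frontier selection, flight, and detection; `sim` and its methods (`get_occupancy_grid`, `drone_position`, `fly_to`, `get_camera_frame`) are invented stand-ins for the simulator's API, and `find_frontier_voxels` and `select_goal` refer to the exploration sketch above.

```python
# Sketch: glue loop combining exploration and detection (placeholder simulator API).
import numpy as np
import torch

def explore_and_detect(sim, detector, max_steps=100):
    visited = set()
    for _ in range(max_steps):
        grid = sim.get_occupancy_grid()             # current LiDAR occupancy map (placeholder)
        frontiers = find_frontier_voxels(grid)      # from the exploration sketch above
        if len(frontiers) == 0:
            break                                   # nothing left to explore
        goal = select_goal(frontiers, visited, sim.drone_position())
        if goal is None:
            break
        visited.add(tuple(np.round(goal).astype(int)))
        sim.fly_to(goal)                            # navigate to the frontier, avoiding obstacles
        frame = sim.get_camera_frame()              # float tensor in [0, 1], C x H x W (placeholder)
        with torch.no_grad():
            detections = detector([frame])[0]       # boxes/labels/scores, as in the SSD sketch
        # ...detections could be logged or overlaid on the growing map here...
```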