Successfully developed a Gazebo-based simulation environment on Ubuntu Linux (via WSL2), integrated with ArduPilot SITL (Software-In-The-Loop) for multi-UAV systems. The environment was configured with custom world models that reproduce real-world dynamics for UAV testing. Integration was managed through a catkin_ws build environment, which handled the ROS packages and the communication between the Gazebo simulation and the ArduPilot plugin. This enabled concurrent control of multiple UAVs along with inter-drone communication, allowing swarming behaviors and collision-avoidance protocols to be exercised end to end.
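As an illustration of the multi-vehicle setup, the sketch below connects to two SITL instances from a single Python script using pymavlink. It assumes the usual sim_vehicle.py -I&lt;N&gt; convention, under which each instance offsets its UDP MAVLink port by 10 (14550, 14560, ...); the port numbers and swarm size are illustrative assumptions, not the project's exact configuration.

```python
# Minimal sketch (not the project's actual code): connecting to two
# ArduPilot SITL instances from one script with pymavlink.
# Assumes each instance was started with sim_vehicle.py -I<N>, which
# offsets its UDP MAVLink port by 10 per instance (14550, 14560, ...).
from pymavlink import mavutil

NUM_UAVS = 2  # hypothetical swarm size for illustration

links = []
for n in range(NUM_UAVS):
    port = 14550 + 10 * n
    link = mavutil.mavlink_connection(f"udp:127.0.0.1:{port}")
    link.wait_heartbeat()  # block until this SITL instance is alive
    print(f"UAV {n}: heartbeat from system {link.target_system}")
    links.append(link)

# Switch every vehicle to GUIDED mode so navigation commands are accepted.
for link in links:
    link.set_mode("GUIDED")
```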
The project uses the Robot Operating System (ROS) for communication and control, a camera for visual input, and the Gazebo ArduPilot plugin for realistic flight dynamics. Object detection is handled by YOLO (You Only Look Once), a Convolutional Neural Network (CNN) architecture for real-time image detection: the network divides each input image into a grid of cells and, in a single forward pass, predicts bounding boxes and class labels for the objects in each cell. The YOLO interface, with its predefined, pretrained object classes, processes the captured frames to identify and classify objects in the drone's vicinity.

Additionally, a LiDAR plugin reports the positions of surrounding objects relative to the drone as a vector of range measurements (one distance per beam). ROS nodes, together with a simple avoidance algorithm, publish basic navigation commands to the drone from the bash terminal, continuously updating its flight path to steer around obstacles.
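The following sketch shows how such a detection node might look. The camera topic name "/camera/image_raw", the weights file, and the ultralytics YOLO interface are illustrative assumptions; the project's actual predefined YOLO interface may differ.

```python
# Minimal sketch (assumptions noted above): a ROS node that runs YOLO
# on camera frames streamed from the Gazebo simulation.
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
from ultralytics import YOLO

bridge = CvBridge()
model = YOLO("yolov8n.pt")  # hypothetical pretrained weights file

def on_image(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    # YOLO divides the frame into a grid and predicts boxes per cell,
    # so one forward pass yields every detection in the image at once.
    results = model(frame, verbose=False)
    for box in results[0].boxes:
        label = model.names[int(box.cls)]
        rospy.loginfo("detected %s (conf %.2f)", label, float(box.conf))

rospy.init_node("yolo_detector")
rospy.Subscriber("/camera/image_raw", Image, on_image, queue_size=1)
rospy.spin()
```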
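A minimal sketch of the avoidance loop is shown below: the node reads the LiDAR scan as a vector of ranges and publishes a velocity setpoint through MAVROS that steers away from the nearest obstacle. The topic names, the 2 m safety threshold, and the fixed speeds are illustrative assumptions rather than the project's exact parameters.

```python
# Minimal sketch (not the project's exact code): LiDAR-driven obstacle
# avoidance that publishes velocity setpoints through MAVROS.
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import TwistStamped

SAFE_DISTANCE = 2.0  # metres; assumed threshold for triggering avoidance

cmd_pub = None

def on_scan(scan):
    # scan.ranges is the per-beam distance vector produced by the plugin.
    nearest = min(scan.ranges)
    cmd = TwistStamped()
    cmd.header.stamp = rospy.Time.now()
    if nearest < SAFE_DISTANCE:
        # Obstacle too close: stop forward motion and yaw away from it.
        idx = scan.ranges.index(nearest)
        bearing = scan.angle_min + idx * scan.angle_increment
        cmd.twist.angular.z = -0.5 if bearing > 0 else 0.5
    else:
        cmd.twist.linear.x = 1.0  # clear path: cruise forward at 1 m/s
    cmd_pub.publish(cmd)

rospy.init_node("obstacle_avoidance")
cmd_pub = rospy.Publisher("/mavros/setpoint_velocity/cmd_vel",
                          TwistStamped, queue_size=1)
rospy.Subscriber("/scan", LaserScan, on_scan, queue_size=1)
rospy.spin()
```

Publishing TwistStamped setpoints continuously, rather than issuing one-off waypoints, keeps the flight path responsive to each new scan, which matches the continuous-update behavior described above.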