I was extremely fortunate to have the opportunity to work as an Undergraduate Research Assistant in Ontario Tech University's Mechatronic and Robotic Systems Laboratory.
The research project I worked on, in collaboration with a Master's student, was:
Mixed Unmanned Ground Vehicle (UGV) and Unmanned Aerial Vehicle (UAV) Systems
Summary of Project:
The overall objective of this research project is to develop an autonomous UAV that provides motion planning information and sensor data to a multi-robot system of UGVs (Unmanned Ground Vehicles), assisting in their autonomous navigation.
My Tasks/Responsibilities:
Our focus was to build a quadcopter from the ground up, including the hardware assembly, electrical wiring, and software, with the ability to:
Communicate between the onboard computer and flight controller
Merge onboard IMU data with external vision position data from the MoCap system
Fly via manual radio control and autonomously
Detect ArUco markers via an Intel RealSense depth camera
[Assembly photos: bottom plate, middle plate, and top plate of the quadcopter frame]
ROS (Robot Operating System) is an open-source framework with a set of libraries, packages, and tools that help in building robot applications. Most robots, like this drone, are made up of actuators (things that move), sensors (things that read the world), and a control system (the brain of the robot), and ROS lets you connect each component through ROS concepts called nodes, topics, and messages, with publishers and subscribers passing the data between them.
Version of ROS used for the drone
Operating System: Ubuntu 20.04
ROS Distribution: Noetic
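To make the node/topic/publisher/subscriber vocabulary concrete, here is a minimal sketch of a ROS Noetic node in Python; the node and topic names are made up for illustration and are not taken from the drone's code:

#!/usr/bin/env python3
# Minimal ROS node: publishes a string on one topic and logs whatever it hears on the same topic.
import rospy
from std_msgs.msg import String

def callback(msg):
    # Subscriber callback: runs every time a message arrives on /chatter
    rospy.loginfo("heard: %s", msg.data)

rospy.init_node("demo_node")                              # register this program as a ROS node
pub = rospy.Publisher("/chatter", String, queue_size=10)  # publisher side of a topic
rospy.Subscriber("/chatter", String, callback)            # subscriber side of the same topic
rate = rospy.Rate(1)                                      # 1 Hz loop
while not rospy.is_shutdown():
    pub.publish(String(data="hello from the drone"))
    rate.sleep()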
The project started with software simulations, used to understand the fundamentals of the drone and of sending code to and from it, to learn the Pixhawk ecosystem, and, most importantly, to learn MAVLink and MAVROS messages. Important software used included:
QGroundControl
Gazebo
Visual Studio Code (code editor)
MAVROS is a ROS package that enables MAVLink (Micro Air Vehicle Link) messages to be sent and received between compatible devices, establishing communication between the onboard computer and the flight controller.
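For example, once MAVROS is running, one quick way to confirm the MAVLink link is alive is to watch the standard /mavros/state topic; a minimal sketch:

#!/usr/bin/env python3
# Sketch: watch /mavros/state to confirm the flight controller is connected over MAVLink.
import rospy
from mavros_msgs.msg import State

def state_cb(msg):
    rospy.loginfo("connected: %s | armed: %s | mode: %s", msg.connected, msg.armed, msg.mode)

rospy.init_node("link_check")
rospy.Subscriber("/mavros/state", State, state_cb)
rospy.spin()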
QGroundControl (QGC) is a critical piece of software for any drone application, as it provides full flight control over the drone, along with sensor calibration, tuning of important drone parameters, and general pre-flight checks, all over MAVLink messages.
Gazebo, an open-source 3D robotics simulator, was used in parallel with the actual drone: with the PX4-Autopilot package installed, it simulates a quadcopter model in an open-world environment running MAVROS. Because uploading and testing code on the actual drone was a lengthy process, being able to run the code in a simulated environment cut our prototyping time significantly.
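As a rough sketch of the SITL workflow (the directory path is an assumption, and the exact make target and port numbers depend on the PX4 version):

cd ~/PX4-Autopilot
make px4_sitl gazebo                                                    # build and launch the simulated quadcopter in Gazebo
roslaunch mavros px4.launch fcu_url:="udp://:14540@127.0.0.1:14557"     # connect MAVROS to the SITL instance instead of real hardware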
Visual Studio Code was used as our code editor for both C++ and Python, to write the programs that enable autonomous flights in Offboard mode on the drone.
Demonstration of SITL, including the Gazebo simulation and QGroundControl, while running an autonomous mission program we developed.
Although the assembly of the drone was complete, there was still a lot to do before the first flight. Because we were flying indoors, a GPS lock is not possible, so a MoCap external vision system was used alongside the IMU data from the onboard flight controller to make up for the drone's missing position awareness.
The fusion of the external vision data and the IMU data is done by the EKF algorithm on the flight controller, and the result is made available to the onboard computer. MAVROS on the onboard computer then publishes the "mavros/local_position/pose" topic to the ROS network, and this topic provides a reliable position estimate of the drone. Having a reliable position estimate allows the quadcopter to hold a stable position, readjusting the motors accordingly.
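A minimal sketch of reading that estimate from a script on the onboard computer or a remote PC (the topic and message type are the standard MAVROS ones):

#!/usr/bin/env python3
# Sketch: read the fused position estimate that the EKF exposes through MAVROS.
import rospy
from geometry_msgs.msg import PoseStamped

def pose_cb(msg):
    p = msg.pose.position
    rospy.loginfo("local position x=%.2f y=%.2f z=%.2f", p.x, p.y, p.z)

rospy.init_node("pose_monitor")
rospy.Subscriber("/mavros/local_position/pose", PoseStamped, pose_cb)
rospy.spin()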
A total of 14 external cameras track the 8 mm active MoCap markers [9] on the drone
Here is a video showing all three in real time: marker tracking in the MoCap software, local position tracking in RVIZ, and a flight.
Procedure before any flight
A procedural routine is carried out before every flight, whether manual or autonomous, and it includes:
Turning on and calibrating the OptiTrack MoCap system (external vision system):
this includes calibrating with the wand, setting the ground plane, creating a rigid body, and finally broadcasting the data over the network
Attaching the LiPo battery to the quadcopter to power up the onboard computer, flight controller, and motors:
Here are the commands the onboard computer needs to run in order to initialize communication between all components:
roslaunch mocap_optitrack mocap_px4.launch
roslaunch mavros px4.launch
Now a check is done to ensure everything is running correctly:
Launch RVIZ (this can be done remotely) and ensure that a local position topic is being published and that it follows the position of the drone as you move it by hand
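In practice this check can also be done from a terminal with standard ROS commands (assuming the MoCap launch file forwards the rigid-body pose to MAVROS's vision_pose topic, which is the usual way PX4 receives external vision data):

rostopic echo -n 1 /mavros/vision_pose/pose      # external-vision pose being forwarded to the flight controller
rostopic echo -n 1 /mavros/local_position/pose   # fused EKF estimate; should track the drone as it is moved by hand
rviz                                             # visualize the pose frames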
Manual flight is achieved through a handheld radio controller, which communicates with the radio receiver connected to the Pixhawk (see wiring diagram):
Disengage the kill switch
Arm the drone
Raise the throttle to take off
Autonomous flight is conducted in the drone's "Offboard" mode, in which the vehicle's movement is controlled programmatically. The purpose of the code we developed was to:
Autonomously arm -> enter Offboard mode -> take off -> fly to given coordinates around the room -> confirm it has reached those coordinates -> return home -> land -> auto disarm
The code runs on a remote PC and commands the drone over MAVROS.
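The exact mission script isn't reproduced here, but the core offboard pattern it follows looks roughly like the sketch below: stream position setpoints, request OFFBOARD mode, arm, and keep publishing setpoints (PX4 rejects OFFBOARD unless setpoints are already streaming). The topic and service names are the standard MAVROS ones; the hover target is a placeholder.

#!/usr/bin/env python3
# Condensed sketch of the offboard pattern: stream setpoints, switch to OFFBOARD, arm, hold a hover target.
import rospy
from geometry_msgs.msg import PoseStamped
from mavros_msgs.msg import State
from mavros_msgs.srv import CommandBool, SetMode

current_state = State()

def state_cb(msg):
    global current_state
    current_state = msg

rospy.init_node("offboard_sketch")
rospy.Subscriber("/mavros/state", State, state_cb)
pose_pub = rospy.Publisher("/mavros/setpoint_position/local", PoseStamped, queue_size=10)
rospy.wait_for_service("/mavros/cmd/arming")
rospy.wait_for_service("/mavros/set_mode")
arming = rospy.ServiceProxy("/mavros/cmd/arming", CommandBool)
set_mode = rospy.ServiceProxy("/mavros/set_mode", SetMode)

target = PoseStamped()
target.pose.position.z = 1.0            # placeholder hover target: 1 m above the takeoff point

rate = rospy.Rate(20)                   # PX4 needs setpoints faster than 2 Hz to stay in OFFBOARD
for _ in range(100):                    # stream setpoints first, otherwise the OFFBOARD request is rejected
    pose_pub.publish(target)
    rate.sleep()

set_mode(custom_mode="OFFBOARD")        # request Offboard mode through the MAVROS service
arming(True)                            # arm the motors

while not rospy.is_shutdown():
    pose_pub.publish(target)            # keep streaming setpoints to hold the hover
    rate.sleep()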
The Gazebo simulated environment tremendously cut down code prototyping time, as it mimics real-life drone flight almost identically, letting us run the code in SITL before flying for real.
Now that the drone was fully assembled and autonomous flight had been achieved, adding a camera to the drone was the next step. Integrating a camera allowed us to autonomously detect an ArUco marker during a flight, pinpoint its location, and land on it.
Integrating the camera required additional packages to be installed (a typical way to fetch and build them is sketched after the list):
fiducials -> for ArUco/fiducial marker detection
realsense-ros -> for Intel RealSense devices
vision_msgs -> for interfacing with vision pipelines
ddynamic_reconfigure -> allows parameters of a ROS node to be modified at runtime
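A typical way to fetch and build these packages in a catkin workspace might look like the following (the workspace path, repositories, and branches are assumptions, not necessarily what we used):

cd ~/catkin_ws/src
git clone https://github.com/UbiquityRobotics/fiducials.git      # provides aruco_detect
git clone https://github.com/IntelRealSense/realsense-ros.git    # provides realsense2_camera
sudo apt install ros-noetic-vision-msgs ros-noetic-ddynamic-reconfigure
cd ~/catkin_ws && catkin_make
source devel/setup.bash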
Once all the packages are successfully built and the camera is connected via USB to the onboard computer:
roslaunch realsense2_camera rs_camera.launch
roslaunch aruco_detect aruco_detect.launch
These allow full use of the D435i depth camera and publish the ArUco fiducial detection topics. Launching them alongside the mocap_optitrack and MAVROS px4 launch files allows for a synchronized flight.
The purpose of the code we developed here was to:
Autonomously arm -> enter Offboard mode -> take off -> do a small survey of the room -> when it detects the ArUco marker, subscribe to its location topic -> return home -> update the ArUco marker's location as the final coordinate -> fly to the marker -> land -> auto disarm
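The "subscribe to its location" step relies on aruco_detect publishing its detections on the /fiducial_transforms topic (fiducial_msgs/FiducialTransformArray). A minimal sketch of that subscriber is below; note the pose comes out in the camera frame and would still need to be transformed into the drone's local frame before being used as a landing target:

#!/usr/bin/env python3
# Sketch: listen for ArUco detections from aruco_detect and log where each marker was seen.
import rospy
from fiducial_msgs.msg import FiducialTransformArray

def fiducial_cb(msg):
    for f in msg.transforms:
        t = f.transform.translation
        # Pose is reported in the camera frame
        rospy.loginfo("marker %d at x=%.2f y=%.2f z=%.2f (camera frame)", f.fiducial_id, t.x, t.y, t.z)

rospy.init_node("marker_listener")
rospy.Subscriber("/fiducial_transforms", FiducialTransformArray, fiducial_cb)
rospy.spin()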
Here is the camera's point of view during the flight. As you can see, while surveying its surroundings the camera picks up the marker, and this updates the subscriber in our code with the marker's position estimate.
I used SolidWorks to design, prototype, and 3D print an adjustable camera mount fixed to the bottom of the drone. Because a drone can be subjected to hard falls, crashes, and vibration, the mount was designed for strength, ensuring stress does not exceed the maximum allowable yield stress, and for rigidity, ensuring any twisting or deflection stays within the allowable limits.
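The specific loads and dimensions aren't listed here, but the strength and rigidity checks described amount to the standard beam relations, shown only to illustrate the criteria (treating the mount arm as a cantilever is an assumption):

\sigma_{\max} = \frac{Mc}{I} \le \sigma_{\text{yield}}, \qquad \delta_{\max} = \frac{FL^{3}}{3EI} \le \delta_{\text{allowable}}

where M is the bending moment from the camera's weight (with crash and vibration load factors), c the distance from the neutral axis to the outer fiber, I the cross-section's second moment of area, and F, L, and E the applied load, mount length, and elastic modulus of the 3D-printed material.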
In future development of the drone, the ArUco marker will be placed on an Unmanned Ground Vehicle (UGV), and the goal will be to launch from and land on the moving UGV. Further development will also include the drone creating a geographical map of its surroundings using the camera and sending the map to the UGV so the UGV can find the best path to travel.