Visit quetzal-drone.com for flight test video and more development info.
CAD Design of the Drone:
Jetson Orin Nano:
The NVIDIA Jetson Orin Nano serves as the onboard computer responsible for real-time visual target detection. The device runs a Linux operating system, and a YOLO object detection model has been deployed and tested with live camera input. The detection pipeline is implemented in Python and deployed as a TensorRT-optimized model. NVIDIA's TensorRT is a software development kit (SDK) that optimizes trained deep learning models for fast, efficient inference on NVIDIA GPUs; it applies techniques such as layer fusion and kernel auto-tuning to reduce latency and maximize throughput. During operation, the Jetson initializes the CSI camera to capture live video at 1920x1080. Each frame is then processed by the YOLO detection model, and the resulting bounding boxes and classification labels are displayed in real time. The core logic can be seen below:
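A minimal sketch of a capture-and-detect loop of this kind is shown below. It assumes the `ultralytics` YOLO API, OpenCV built with GStreamer support, and the Jetson's `nvarguscamerasrc` CSI camera element; the engine filename and window title are hypothetical placeholders, not the project's actual files.

```python
def gst_pipeline(width=1920, height=1080, fps=30):
    """Build a GStreamer string for the Jetson CSI camera (nvarguscamerasrc)."""
    return (
        "nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"framerate={fps}/1 ! nvvidconv ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
    )

def main():
    # Assumed dependencies: opencv-python (with GStreamer) and ultralytics.
    import cv2
    from ultralytics import YOLO

    # Hypothetical TensorRT-exported engine file.
    model = YOLO("yolo_detector.engine")
    cap = cv2.VideoCapture(gst_pipeline(), cv2.CAP_GSTREAMER)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = model(frame)         # run inference on the frame
        annotated = results[0].plot()  # draw bounding boxes + class labels
        cv2.imshow("detections", annotated)
        if cv2.waitKey(1) == 27:       # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
```

The GStreamer string keeps the high-resolution frames in NVMM memory until `nvvidconv` hands BGR frames to OpenCV, which is the usual pattern for CSI capture on Jetson boards.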
To verify real-time performance, a latency validation test was performed using the CSI camera connected to the Jetson. A running stopwatch was filmed with the CSI camera while the output was displayed on a monitor, and a photo was taken that captured both the physical stopwatch and its image on the monitor; the difference between the two readings gives the end-to-end latency. The image can be seen below:
Based on the result of this experiment, the end-to-end latency is about 0.41 seconds. This figure includes the delay of camera capture, frame buffering, inference processing, box annotation, and display output. Since the Jetson subsystem is not responsible for flight stabilization, this latency is acceptable for mission-level visual classification, and the test indicates that the detection workflow is suitable for integrated flight testing.
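As a complementary software-side check, the processing stages can also be timed individually inside the detection loop. The helper below is a hypothetical sketch (the stage names in the comment are illustrative); note that camera exposure and display refresh are invisible to software timers, which is why the stopwatch photo remains the authoritative end-to-end measurement.

```python
import time

def time_stage(fn, *args, **kwargs):
    """Run one pipeline stage (e.g. capture, inference, annotation)
    and return its result together with the elapsed wall-clock time."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Illustrative use inside the loop (names hypothetical):
#   (ok, frame), t_cap = time_stage(cap.read)
#   results,     t_inf = time_stage(model, frame)
# Summing the per-stage times shows how much of the 0.41 s budget is
# spent in processing versus camera/display hardware delay.
```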