Single-camera-based 3D Drone Trajectory Reconstruction for Surveillance Systems
Seo-Bin Hwang and Yeong-Jun Cho
Chonnam National University
IEEE ACCESS 2025
ABSTRACT
Drones have been widely utilized in various fields, but the number of drones used for illegal and hazardous purposes has recently increased. To counter such illegal drones, in this work we propose a novel framework for reconstructing the 3D trajectories of drones using a single camera. By leveraging calibrated cameras, we exploit the relationship between 2D and 3D spaces. We automatically track the drones in 2D images using a drone tracker and estimate their 2D rotations. By combining the estimated 2D drone positions with their actual length information and the camera parameters, we geometrically infer the 3D trajectories of the drones. To address the lack of public drone datasets, we also create synthetic 2D and 3D drone datasets. The experimental results show that the proposed methods accurately reconstruct drone trajectories in 3D space and demonstrate the potential of our framework for single-camera-based surveillance systems.
OVERVIEW
1. Drone Detection and Tracking
Uses a real-time object detector (YOLOv5) combined with a tracker (ByteTrack) to extract 2D bounding boxes of drones over time. Each detection is represented as a tuple: (x, y, w, h, c), where c is the drone class. A new 2D drone image dataset 2Drone(on+aug) was created to improve detection performance.
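The per-frame detection tuple above can be sketched with a small data structure. This is a minimal illustration, not the framework's actual code; the `Detection` class and `track_centers` helper are hypothetical names for this sketch.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    """One tracked drone detection: box center (x, y), size (w, h), class c."""
    x: float
    y: float
    w: float
    h: float
    c: int  # drone class label

def track_centers(track: List[Detection]) -> List[Tuple[float, float]]:
    """Collect the 2D box centers of a single drone track over time."""
    return [(d.x, d.y) for d in track]

# A toy two-frame track produced by a detector + tracker pipeline.
track = [Detection(100.0, 50.0, 32.0, 20.0, 0),
         Detection(104.0, 52.0, 33.0, 21.0, 0)]
print(track_centers(track))  # [(100.0, 50.0), (104.0, 52.0)]
```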
2. 2D Drone Rotation Estimation via PCA (Figure 4 and 12)
Because drones are rigid and largely symmetric, PCA is applied to the segmented foreground of the drone (obtained with U2-Net) to estimate its main axis of rotation in 2D. The first principal component (v₁) corresponds to the direction of maximal variance, approximating the drone’s physical orientation. This estimated axis is critical for reconstructing the drone’s 3D pose.
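The PCA step can be sketched as follows: collect the foreground pixel coordinates, center them, and take the eigenvector of their covariance matrix with the largest eigenvalue. This is a minimal NumPy sketch under the assumption that the mask is a clean binary foreground; the function name is illustrative.

```python
import numpy as np

def principal_axis(mask: np.ndarray) -> np.ndarray:
    """Estimate the drone's 2D main axis from a binary foreground mask via PCA."""
    ys, xs = np.nonzero(mask)                      # foreground pixel coordinates
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)                        # center the point cloud
    cov = pts.T @ pts / len(pts)                   # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    v1 = eigvecs[:, -1]                            # direction of maximal variance
    return v1 / np.linalg.norm(v1)

# Synthetic horizontal bar: the estimated axis should align with the x-direction.
mask = np.zeros((50, 50), dtype=np.uint8)
mask[24:26, 5:45] = 1
v1 = principal_axis(mask)
print(abs(v1[0]))  # close to 1.0
```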
3. 3D Trajectory Reconstruction Using Geometry (Figure 5)
Assumes that the estimated 2D axis endpoints (p₁, p₂) correspond to the known physical length of the drone (from a spec database). Using the camera intrinsics/extrinsics (K, R, t) and triangulation principles, the angle θ between the rays from the camera center through p₁ and p₂ geometrically determines the drone's distance. Formula used:

D = l / (2 tan(θ/2))

where D is the camera-to-drone distance and l is the real-world drone width.
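The geometric inference above can be sketched under a pinhole camera model: back-project p₁ and p₂ through K⁻¹ to get two viewing rays, take the angle θ between them, recover D = l / (2 tan(θ/2)) (which assumes the drone axis is roughly perpendicular to the viewing ray), and place the drone at distance D along the central ray. This is an illustrative sketch, not the paper's implementation; `reconstruct_3d` is a hypothetical name, and (R, t) is taken as a world-to-camera transform.

```python
import numpy as np

def reconstruct_3d(p1, p2, l, K, R, t):
    """Infer a drone's 3D position from its 2D axis endpoints (pinhole sketch)."""
    Kinv = np.linalg.inv(K)
    r1 = Kinv @ np.array([*p1, 1.0])                 # ray through p1 (camera frame)
    r2 = Kinv @ np.array([*p2, 1.0])                 # ray through p2
    r1 /= np.linalg.norm(r1)
    r2 /= np.linalg.norm(r2)
    theta = np.arccos(np.clip(r1 @ r2, -1.0, 1.0))   # angle between the rays
    D = l / (2.0 * np.tan(theta / 2.0))              # camera-to-drone distance
    mid = (r1 + r2) / 2.0
    mid /= np.linalg.norm(mid)                       # central viewing ray
    Xc = D * mid                                     # drone center in camera frame
    return R.T @ (Xc - t)                            # map back to world coordinates

# Toy check: a 0.5 m-wide drone centered on the optical axis at depth 10 m
# projects its endpoints to u = 295 and u = 345 with f = 1000, c = (320, 240).
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
Xw = reconstruct_3d((295.0, 240.0), (345.0, 240.0), 0.5, K, np.eye(3), np.zeros(3))
print(Xw)  # approximately [0, 0, 10]
```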
🔄 Post-processing
A moving average filter is applied to the trajectory to reduce noise (Figure 11). PCA-based rotation estimation is shown to be fast and stable (0.0005 s per frame, Table 3), outperforming homography-based and max-distance heuristics.
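The smoothing step amounts to a per-axis moving average over the reconstructed (N, 3) trajectory. A minimal sketch, assuming an odd window size and edge-replicated padding so the output keeps the input length (both are assumptions, not details from the paper):

```python
import numpy as np

def smooth_trajectory(traj: np.ndarray, window: int = 5) -> np.ndarray:
    """Smooth an (N, 3) trajectory with a per-axis moving average filter."""
    kernel = np.ones(window) / window
    # Replicate endpoints so the 'valid' convolution returns N samples.
    padded = np.pad(traj, ((window // 2, window // 2), (0, 0)), mode="edge")
    return np.stack(
        [np.convolve(padded[:, k], kernel, mode="valid") for k in range(traj.shape[1])],
        axis=1,
    )

# A constant trajectory passes through the filter unchanged.
traj = np.ones((10, 3))
smoothed = smooth_trajectory(traj)
print(smoothed.shape)  # (10, 3)
```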
RESULTS
🖇️ Citation
@article{hwang20233d,
  title={3D Trajectory Reconstruction of Drones using a Single Camera},
  author={Hwang, Seobin and Kim, Hanyoung and Heo, Chaeyeon and Na, Youkyoung and Cho, Yeongjun},
  journal={arXiv preprint arXiv:2309.02801},
  year={2023}
}