Introduction

Khaled Abu-Jbara1, Ganesh Sundaramoorthi1, and Christian Claudel2
2 University of Texas at Austin, USA

Unmanned Aerial Vehicles (UAVs) have become increasingly prevalent over the past decade, enabling monitoring tasks that are too dirty, too dull, or too dangerous (DDD) to be undertaken by manned aircraft. UAVs have, for instance, been used in surveillance applications including fire detection [11], event detection [18], and object tracking [26]. They also play an increasing role in military operations [13], for both reconnaissance and strike. While rotorcraft UAVs are the most commonly used, fixed-wing UAVs generally have higher autonomy and endurance, owing to their relatively high lift-to-drag ratio, and are thus preferred for certain types of operations (for example, the surveillance of a target, or search and rescue operations). However, takeoffs and landings of fixed-wing UAVs are particularly risky [25], since lightweight UAVs are much more sensitive to turbulence than heavier manned aircraft and thus have much shorter time constants. In currently operated UAVs, takeoffs and landings are typically carried out using navigation sensors such as absolute positioning systems (e.g., GPS), accelerometers, gyroscopes, and magnetometers.
While fusing these sensors greatly improves accuracy, the residual positional errors are still too large to allow reliable UAV landings on narrow UAV airstrips. Positioning accuracy on practical UAV airstrips could be improved with enhanced GPS systems (such as differential GPS or RTK GPS), but these require additional, expensive equipment (reference stations, phase-sensing antennas). The use of high-accuracy positioning systems also requires the UAV airstrips to be precisely mapped, and this information is most often unavailable. In contrast, the ever-decreasing cost of cameras makes them particularly suitable for this positioning application.

This work presents a novel real-time algorithm for runway detection and tracking for Unmanned Aerial Vehicles (UAVs). The algorithm relies on a combination of segmentation-based region competition and the minimization of a particular energy function to detect and identify the runway edges from streaming video data. The resulting video-based runway position estimates can be updated using a Kalman Filter (KF) that integrates additional kinematic estimates, such as position and attitude angles, derived from video, inertial measurement unit data, or positioning data. This allows more robust tracking of the runway under turbulence. We illustrate the performance of the proposed runway detection and tracking scheme on various experimental UAV flights conducted by the Saudi Aerospace Research Center (KACST) and by the University of Texas at Austin, and on simulated landing videos obtained from a flight simulator. Results show accurate tracking of the runway edges during the landing phase, under various lighting conditions, even in the presence of roads, taxiways, and other obstacles. This suggests that the positional estimates derived from the video data can significantly improve the guidance of the UAV during the takeoff and landing phases.
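To make the fusion step concrete, the following is a minimal sketch of how video-based runway position estimates could be combined with a kinematic model in a Kalman Filter. It is not the authors' implementation: the state vector (the UAV's lateral offset from the runway centerline and its rate), the constant-velocity model, and all noise parameters are illustrative assumptions.

```python
# Minimal KF sketch, assuming state x = [lateral offset d (m), rate d_dot (m/s)]
# and noisy video measurements of d at each frame. All parameters hypothetical.
import numpy as np

class RunwayOffsetKF:
    def __init__(self, dt, meas_var=0.5, accel_var=1.0):
        self.x = np.zeros(2)                      # state estimate [d, d_dot]
        self.P = np.eye(2) * 10.0                 # initial state covariance
        self.F = np.array([[1.0, dt],             # constant-velocity transition
                           [0.0, 1.0]])
        # process noise from white lateral acceleration (discretized)
        self.Q = accel_var * np.array([[dt**4 / 4, dt**3 / 2],
                                       [dt**3 / 2, dt**2]])
        self.H = np.array([[1.0, 0.0]])           # video measures offset only
        self.R = np.array([[meas_var]])           # video measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        y = z - self.H @ self.x                   # innovation
        S = self.H @ self.P @ self.H.T + self.R   # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P

kf = RunwayOffsetKF(dt=1 / 30)                    # 30 fps video stream
for z in [1.2, 1.1, np.nan, 0.9]:                 # NaN = detection dropout
    kf.predict()
    if not np.isnan(z):                           # coast on prediction if the
        kf.update(np.array([z]))                  # runway detector misses a frame
    print(kf.x[0])                                # fused offset estimate
```

The predict/update split is what provides robustness under turbulence: when the video-based detector momentarily fails, the filter coasts on the kinematic prediction rather than losing track of the runway.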