Flying and Ground Robot Collaboration for Camera-based Search and Rescue


Search and Rescue (SaR) missions present challenges due to the complexity of disaster scenarios and the urgency of rescuing victims. Most casualties and injuries occur in developing countries, where budgets are scarce. Robotics has become indispensable for rapidly locating disaster victims, and combining flying and ground robots serves this purpose more effectively owing to their complementary viewpoints. To this end, a cost-effective framework for performing typical SaR tasks is presented. The method leverages You Only Look Once (YOLO) and video streams transmitted by an Unmanned Ground Vehicle (UGV) and an Unmanned Aerial Vehicle (UAV). Three sets of experiments were conducted at the Cyber Zoo of the Delft University of Technology. When pose estimation was exploited for human depth estimation, the experiments revealed the susceptibility of the proposed approaches to variations in pose. When tracking moving object trajectories, the collaboration proved particularly advantageous on wide-area cluttered trajectories, as opposed to narrow-area unobstructed trajectories, where deploying a single robot suffices. When mapping terrain elevation, relative errors dropped significantly when the UGV and the UAV collaborated. Moving forward, refining the algorithms, enhancing collaborative functionalities, and devising adaptable strategies tailored to diverse SaR scenarios will be pivotal.
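To make the pose-based depth estimation concrete, the sketch below shows one common way such an estimate can be obtained from a monocular camera: under the pinhole model, the distance to a person scales as depth = f · H / h, where f is the focal length in pixels, H is an assumed real-world human height, and h is the person's height in pixels (e.g., the span between head and ankle keypoints). The function name, the focal length, and the assumed height are illustrative assumptions, not values taken from the paper; the hard-coded H is also exactly why such approaches are susceptible to pose variation, since a crouching or fallen victim violates the standing-height assumption.

```python
def estimate_depth_m(pixel_height: float,
                     focal_length_px: float = 900.0,
                     person_height_m: float = 1.70) -> float:
    """Estimate camera-to-person distance via the pinhole model.

    depth = f * H / h, where:
      f (focal_length_px)  -- focal length in pixels (illustrative value;
                              obtained via camera calibration in practice)
      H (person_height_m)  -- assumed real-world height of the person;
                              this assumption breaks for non-standing poses
      h (pixel_height)     -- detected height of the person in pixels,
                              e.g. from YOLO box or pose keypoints
    """
    if pixel_height <= 0:
        raise ValueError("pixel height must be positive")
    return focal_length_px * person_height_m / pixel_height

# A person spanning 300 px with f = 900 px and H = 1.70 m:
# depth = 900 * 1.70 / 300 = 5.1 m
print(estimate_depth_m(300.0))
```

A person who kneels appears roughly half as tall in pixels, so this estimator would roughly double the reported distance, illustrating the pose sensitivity the experiments observed.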