The project began with extensive team ideation to shape the final product, and it would not have been possible without that collaboration. The entire team worked together to outline how the task would be addressed and collectively developed the project steps.
We live in a world full of inaccessible and hazardous environments where drones are necessary for safe navigation. We proposed this computer vision project to help rescuers in hazardous environments locate individuals relative to their own position.
Research and development
Intel RealSense D435 depth camera
2 tripods
GTX 1080Ti
Overall, the costs of the project proved minimal for the task being addressed, since one of the goals was to keep the project accessible.
Capture the invisible: Use a depth enabled camera to capture subjects that are not visible to the viewer.
Subject recognition: Recognize the subject in the image and extract key points.
3D Key Point Mapping: Use the disparity image to compute the 3D location of each key point.
Coordinate Transformation: Transform the key points from the depth camera to the coordinate frame of the viewer.
2D Visualization: Transform the 3D key points to the 2D pixel coordinates of the viewer’s camera.
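The 3D key point mapping step amounts to back-projecting a pixel through a pinhole camera model using the depth value at that pixel. A minimal sketch of this, assuming a simple pinhole model with placeholder intrinsics (the real values would come from the camera's calibration, not the numbers below):

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth (meters) into 3D camera coordinates."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Placeholder intrinsics for illustration only
fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0

# A pixel at the principal point lands on the optical axis
point = backproject(320.0, 240.0, 2.0, fx, fy, cx, cy)
```

In practice the RealSense SDK exposes the camera's calibrated intrinsics directly, so a real implementation would read them from the device rather than hard-coding them.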
Step 1: Capture depth information using the Intel RealSense D435 depth camera.
Step 2: Extract key features using the MoveNet pose detection model.
Step 3: Map each detected key point to a 3D location using the depth image.
Step 4: Transform the 3D key points from the depth camera's frame to the viewer's frame.
Step 5: Visualize the points from the viewer's camera, using an iPhone's extrinsic and intrinsic parameters to project the 3D points into a 2D display for the viewer.
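Steps 4 and 5 can be sketched as a rigid-body transform followed by a pinhole projection. The rotation, translation, and viewer intrinsics below are illustrative placeholders; in the actual system the iPhone's parameters would come from calibration:

```python
import numpy as np

def transform_point(p_cam, R, t):
    """Map a 3D point from the depth camera's frame into the viewer's frame."""
    return R @ p_cam + t

def project(p_viewer, K):
    """Project a 3D point in the viewer's frame onto viewer pixel coordinates."""
    uvw = K @ p_viewer
    return uvw[:2] / uvw[2]

# Placeholder extrinsics: viewer 1 m to the right of the depth camera, same orientation
R = np.eye(3)
t = np.array([-1.0, 0.0, 0.0])

# Placeholder viewer intrinsics (a real iPhone's would come from calibration)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

p_depth = np.array([1.0, 0.0, 4.0])   # a key point in the depth camera's frame
p_view = transform_point(p_depth, R, t)
uv = project(p_view, K)
```

Here the point lands on the viewer camera's optical axis, so it projects to the principal point (320, 240); points behind the viewer (negative depth after the transform) would need to be culled before projection.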
The application demonstrated a proof of concept for transforming location information so that a viewer could locate individuals outside their direct field of view. The depth camera provided accurate information to the viewer whenever the target was within its field of view. Future work would involve characterizing the limits of that field of view and improving the point mapping so that points are removed when the target falls outside it. With a more advanced depth camera, the accuracy of the points could improve greatly. Overall, the project demonstrated that target identification using drones could be implemented with minimal architecture and equipment.