Rapid Damage Mapping Challenge
In disaster scenarios, drones can quickly survey hard-to-reach areas and provide aerial views of damage, aiding first responders in situational assessment. This challenge focuses on using drone imagery to identify and map damage (e.g. collapsed buildings, flooded regions) immediately after a natural disaster. Rapid damage assessment is crucial for prioritizing rescue and relief efforts and can save lives by directing resources where they are needed most.
Damage Detection – Develop an algorithm to analyze drone images (or video frames) and detect signs of damage (e.g. collapsed structures, debris, fires, or floods). This could involve image classification (damaged vs. undamaged areas) or object detection (bounding boxes around damaged buildings); a minimal end-to-end sketch follows this task list.
Annotation & Mapping – Mark or annotate the identified damaged areas on the imagery. Participants might create heatmaps or overlay markers on the images indicating high-damage zones.
Summary Report – Derive simple metrics from the analysis, such as the number of damaged buildings detected or percentage of area affected, and output a brief summary that responders could use.
(Optional) Change Comparison – If pre-disaster images are available, perform a before/after comparison to highlight new damage (change detection); a simple differencing baseline is also sketched below.
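To make the first three tasks concrete, here is a minimal sketch of one possible pipeline: tile the image, score each tile with a classifier, shade flagged tiles on an overlay, and print a one-line summary. Everything here is illustrative, not prescribed — `classify_tile` is a crude edge-density stand-in for whatever trained model you use, and the file names and tile size are placeholders.

```python
# damage_overlay.py -- illustrative sketch, not a reference implementation.
# Assumes OpenCV (cv2) and NumPy; `classify_tile` is a stand-in for a real model.
import cv2
import numpy as np

TILE = 256  # tile size in pixels; tune to your imagery resolution

def classify_tile(tile: np.ndarray) -> float:
    """Stand-in scorer: replace with your trained model.
    Here, a crude edge-density proxy so the pipeline runs end to end."""
    gray = cv2.cvtColor(tile, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    return float((edges > 0).mean())

def map_damage(image_path: str, threshold: float = 0.5):
    img = cv2.imread(image_path)
    h, w = img.shape[:2]
    overlay = img.copy()
    damaged, total = 0, 0

    # Task 1: slide a non-overlapping tile grid over the image and score each tile.
    for y in range(0, h - TILE + 1, TILE):
        for x in range(0, w - TILE + 1, TILE):
            tile = img[y:y + TILE, x:x + TILE]
            total += 1
            if classify_tile(tile) >= threshold:
                damaged += 1
                # Task 2: shade the flagged tile red on the overlay.
                cv2.rectangle(overlay, (x, y), (x + TILE, y + TILE), (0, 0, 255), -1)

    # Blend the shading with the original image so context stays visible.
    annotated = cv2.addWeighted(overlay, 0.35, img, 0.65, 0)
    cv2.imwrite("annotated.png", annotated)

    # Task 3: a tiny summary responders could read at a glance.
    pct = 100.0 * damaged / max(total, 1)
    print(f"{damaged}/{total} tiles flagged as damaged ({pct:.1f}% of surveyed area)")
    return annotated
```

The tile-grid approach is deliberately simple; an object detector would replace the inner loop with per-box drawing, but the overlay and summary steps stay the same.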
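For the optional change comparison, one fast baseline is pixel differencing between co-registered pre- and post-disaster images. This sketch assumes the two images are already aligned and the same size (in practice you would need registration first); the threshold and kernel size are arbitrary starting points.

```python
# change_detection.py -- naive baseline sketch; assumes the pre/post images
# are already co-registered (same extent, same resolution).
import cv2

def highlight_change(pre_path: str, post_path: str, out_path: str = "change_mask.png"):
    pre = cv2.imread(pre_path, cv2.IMREAD_GRAYSCALE)
    post = cv2.imread(post_path, cv2.IMREAD_GRAYSCALE)

    # Absolute per-pixel difference, then threshold to a binary change mask.
    diff = cv2.absdiff(pre, post)
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

    # Morphological opening removes isolated noisy pixels.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    cv2.imwrite(out_path, mask)
    return mask
```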
RescueNet (Hurricane Michael) – A public UAV imagery dataset with 4,494 post-disaster high-resolution images of buildings and landscapes after Hurricane Michael. These images are labeled for damage assessment and can be used to train or validate your models.
(Participants can also use any similar open aerial disaster imagery, such as NOAA’s post-hurricane aerial photos or the xView2 building damage dataset, but a curated set like RescueNet is recommended for speed.)
Accuracy of Detection – How well does the solution identify actual damaged areas (a high true-positive rate for damaged sites, with few false alarms)? If ground-truth labels are provided, this can be measured with precision/recall or an F1-score; a short metrics sketch follows this list.
Timeliness & Efficiency – Since speed is critical in disasters, solutions that process images quickly or in real-time (e.g. processing a live drone video feed frame-by-frame) will be rated higher. Efficient algorithms that could run on edge devices get bonus points.
Robustness – The approach should handle varied conditions (different lighting, angles, or debris types) without breaking. Solutions that prove effective across multiple image samples or disaster types are valued.
Presentation of Results – Clarity in how results are presented to a user, for example a visual map with highlighted damage zones or a well-formatted report of findings. A user-friendly output (like an overlay on the original image or a concise summary) will score well.
Completeness – Whether all key tasks were attempted. A submission that not only detects damage but also provides a summary or annotated map addressing the challenge end-to-end will score higher.
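If per-tile or per-building ground truth is available, the accuracy criterion can be reported with standard scikit-learn metrics. The label arrays below are placeholders; in a real submission they would come from your test split.

```python
# Assumes scikit-learn is installed; y_true / y_pred are binary labels per
# tile or per building: 1 = damaged, 0 = undamaged.
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1]  # ground-truth labels (placeholder values)
y_pred = [1, 0, 1, 0, 0, 1, 1]  # model predictions   (placeholder values)

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("F1:       ", f1_score(y_true, y_pred))
```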
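One simple way to demonstrate the timeliness criterion is to run your detector over a recorded (or live) drone feed frame by frame and report throughput. In this sketch `process_frame` and the video file name are stand-ins for your own detection step and input source.

```python
# Assumes OpenCV (cv2); `process_frame` is a stand-in for your detection step.
import time
import cv2

def process_frame(frame):
    """Placeholder for detection/annotation on a single frame."""
    return frame

cap = cv2.VideoCapture("drone_survey.mp4")  # or a camera index / RTSP URL
frames, start = 0, time.time()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    process_frame(frame)
    frames += 1
cap.release()

elapsed = time.time() - start
print(f"processed {frames} frames in {elapsed:.1f}s "
      f"({frames / max(elapsed, 1e-6):.1f} FPS)")
```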
Programming – Python is recommended for quick prototyping, given its wealth of CV/ML libraries. Participants can use Jupyter notebooks for rapid development.
Computer Vision – OpenCV for image processing (reading images, drawing annotations). It can be used for simple filtering or edge detection to pre-process images before analysis.
Machine Learning – Frameworks like TensorFlow/Keras or PyTorch to build or fine-tune a damage detection model. For example, a CNN classifier to distinguish “damaged” vs “undamaged” image tiles, or a pretrained object detector (like a YOLO model) fine-tuned on drone images of rubble.
Pre-trained Models – Given the short timeframe, leveraging pre-trained models is wise. Participants might use a model pre-trained on satellite imagery or COCO and fine-tune it on disaster images for detecting damaged buildings. Transfer learning can dramatically speed up development; a minimal fine-tuning sketch follows this list.
GIS Tools (optional) – If mapping is needed, libraries like Folium or QGIS (for creating geo-referenced maps) could be used, but a simple approach is to output annotated images; a small Folium overlay sketch is also included below.
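A minimal sketch of the transfer-learning idea, assuming PyTorch/torchvision: start from an ImageNet-pretrained ResNet-18, freeze the backbone, and retrain only the final layer as a damaged/undamaged tile classifier. The folder layout (`tiles/train/{damaged,undamaged}/`) and hyperparameters are illustrative, not a recipe.

```python
# Transfer-learning sketch with torchvision; paths and hyperparameters are
# placeholders. Expects tiles/train/{damaged,undamaged}/ image folders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_ds = datasets.ImageFolder("tiles/train", transform=tfm)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

# Start from ImageNet weights, freeze the backbone, replace the head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # damaged vs. undamaged
model = model.to(device)

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):  # a few epochs is often enough for a hackathon baseline
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```

The same pattern applies to an object detector: swap the classifier for a pretrained detection model and fine-tune it on annotated drone tiles.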
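If you do want a geo-referenced output, Folium can drape the annotated image over a basemap, provided you know (or estimate) the bounding box of the survey area. The coordinates below are placeholders; in practice they would come from the drone's flight log or image geotags.

```python
# Assumes folium is installed; bounding-box coordinates are placeholders.
import folium

bounds = [[30.15, -85.70], [30.20, -85.63]]  # [[south, west], [north, east]]

m = folium.Map(location=[30.175, -85.665], zoom_start=14)
folium.raster_layers.ImageOverlay(
    image="annotated.png",   # annotated output from the detection step
    bounds=bounds,
    opacity=0.6,
).add_to(m)
m.save("damage_map.html")
```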
Working Code – A well-documented script or notebook that ingests drone images and outputs the analysis (detected damage locations, annotated images, etc.). The code should be runnable with instructions.
Sample Output – Example output images (or video snippets) with damage annotations, or a generated map file highlighting affected areas. This helps judges see what your solution identifies.
Brief Report/Presentation – A short description of your approach (methods used, any models and how they were trained or tuned) and findings. Include any assumptions made and how the solution could be improved or expanded. This can be a few slides or a markdown README explaining the solution.
Evaluation Metrics – If possible, provide quantitative results on the dataset (e.g. “85% of damaged buildings correctly identified in test images”). This isn’t required but strengthens the submission by showing you validated your solution.