Wildlife Detection Challenge
Overview
Drone imagery can be used to detect wildlife. (Figure: elephants identified in an aerial image with red bounding boxes, illustrating how AI can count and track animals from drone photos.)
Challenge Tasks
Animal Detection – Build a model to detect specific wildlife species in aerial images: for example, elephants, zebras, rhinos, or any large animal visible in a savannah environment. The model should output bounding boxes or points marking where each animal is located in the image.
Counting/Population Estimation – Based on detections, count the number of animals in each image or across an area. The challenge could be framed as counting animals in each drone snapshot. This is valuable for wildlife population surveys (e.g., “we counted 23 elephants in this reserve area today”).
Poacher or Vehicle Detection – As an alternative or addition, detect the presence of humans or vehicles in a protected area (which could indicate poaching or illegal entry). Drones are often used to patrol reserves; an automated system could flag human presence where it’s not expected.
Output Visualization – Mark detected animals on the image (e.g., draw boxes or colored dots on each detected animal) and, ideally, report the total count. If covering multiple species, use a different marker for each (e.g., red dots for elephants, blue for giraffes). For poacher detection, highlight any detected persons or vehicles and raise an alert message.
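As a concrete starting point, here is a minimal visualization sketch using OpenCV. The detection tuple format and the species-to-color mapping are our own assumptions, not part of the challenge spec:

```python
import cv2

# Illustrative species-to-color mapping (BGR); adjust to your classes.
COLORS = {"elephant": (0, 0, 255), "giraffe": (255, 0, 0), "zebra": (0, 255, 0)}

def draw_detections(image_path, detections, out_path):
    """Draw one labeled box per detection and stamp the total count.

    detections: list of (x1, y1, x2, y2, species) tuples in pixel coords.
    """
    img = cv2.imread(image_path)
    for x1, y1, x2, y2, species in detections:
        color = COLORS.get(species, (255, 255, 255))
        cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), color, 2)
        cv2.putText(img, species, (int(x1), int(y1) - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
    cv2.putText(img, f"Total: {len(detections)}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
    cv2.imwrite(out_path, img)
```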
Datasets
Savanna Wildlife Aerial Images – A dataset from Eikelboom et al. (2019) containing 561 aerial images with ~4,305 bounding-box annotations for animals such as zebras, giraffes, and elephants. The dataset (available via the open LILA BC repository or 4TU.ResearchData) provides drone/aircraft images of wildlife with labels, ideal for training and testing your model. Each image is annotated with a bounding box around each animal of the target species.
(Additionally, the “Drones count wildlife” dataset (Hodgson et al. 2018) contains images of faux bird colonies for counting, and other aerial wildlife datasets exist. Participants can use any available drone imagery of animals. If needed, one could even simulate data by placing animal cut-outs on terrain backgrounds, but using real datasets like the above is recommended.)
Judging Criteria
Detection/Counting Accuracy – The solution will be judged on how accurately it detects animals or intruders. If ground-truth counts are available, the absolute error in counts can be used (e.g., did you count the correct number of elephants in each image?). For detection, metrics like precision/recall against the annotated bounding boxes may be considered. Essentially, the closer your counts are to the true counts, the better; a rough self-evaluation along these lines is sketched below.
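A sketch of such a self-check. The 0.5 IoU threshold and the greedy matching are simplifying assumptions of ours, not an official judging metric:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def evaluate_image(pred_boxes, gt_boxes, iou_thr=0.5):
    """Greedy one-to-one matching; returns (precision, recall, count error)."""
    matched, tp = set(), 0
    for p in pred_boxes:
        best, best_iou = None, iou_thr
        for i, g in enumerate(gt_boxes):
            if i in matched:
                continue
            v = iou(p, g)
            if v >= best_iou:
                best, best_iou = i, v
        if best is not None:
            matched.add(best)
            tp += 1
    precision = tp / len(pred_boxes) if pred_boxes else 0.0
    recall = tp / len(gt_boxes) if gt_boxes else 0.0
    return precision, recall, abs(len(pred_boxes) - len(gt_boxes))
```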
Species Identification – If the challenge includes multiple species (say, elephants vs. zebras), the ability to correctly distinguish and count each species is evaluated. A solution that counts animals without distinguishing species may be acceptable if species are not specified, but identifying them adds value. Misclassifying one species as another counts against accuracy.
Robustness to Environment – Wild environments can have camouflaged animals, varying lighting (dappled forest vs open savannah), and different altitudes. Judges will check how the approach might handle these. For example, does a change in background (green grass vs dry dirt) confuse the detector? Solutions that mention handling these (or demonstrate using diverse training data) get credit.
False Positive Control – Especially for poacher detection, false positives (mistaking a bush or rock for an animal/human) can waste ranger resources. Solutions that employ steps to reduce false alarms (like a second verification stage, or ignoring very small detections that are likely errors) are rated higher.
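A first-pass filter along these lines is easy to add after the detector. The score and area thresholds below are placeholder assumptions to tune per flight altitude and species:

```python
def filter_detections(dets, min_score=0.5, min_area_px=100):
    """Drop low-confidence and implausibly small detections.

    dets: list of dicts like {"box": (x1, y1, x2, y2), "score": float}.
    """
    kept = []
    for d in dets:
        x1, y1, x2, y2 = d["box"]
        area = (x2 - x1) * (y2 - y1)
        if d["score"] >= min_score and area >= min_area_px:
            kept.append(d)
    return kept
```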
Usability for Conservation – If the team provides outputs or tools that a wildlife researcher could directly use (like a simple interface, or a well-formatted report of counts per image/area), that’s a plus. Essentially, going the extra mile to think “How would someone use this data?” – for instance, outputting a CSV of counts for each survey flight image – can make the solution more attractive.
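For instance, a per-image species-count CSV takes only a few lines to emit; the column layout here is just one reasonable choice:

```python
import csv
from collections import Counter

def write_count_report(per_image_labels, out_csv="survey_counts.csv"):
    """per_image_labels: dict mapping image filename -> list of species
    labels, one label per detected animal."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image", "species", "count"])
        for image, labels in sorted(per_image_labels.items()):
            for species, count in sorted(Counter(labels).items()):
                writer.writerow([image, species, count])
```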
Suggested Tools and Approaches
Deep Learning Models – For animal detection, detectors such as Faster R-CNN or YOLO can be fine-tuned on wildlife data.
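A minimal transfer-learning sketch with torchvision’s Faster R-CNN (requires torchvision >= 0.13 for the weights API; the class count and the detection-format data loader are assumptions on our part):

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_model(num_classes):
    """COCO-pretrained Faster R-CNN with the box head swapped for our classes.

    num_classes includes background, e.g. 4 for elephant/giraffe/zebra.
    """
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

def train_one_epoch(model, data_loader, optimizer, device="cuda"):
    """data_loader must yield (images, targets) in torchvision's detection
    format: each target a dict with "boxes" (N, 4) and "labels" (N,) tensors.
    Use device="cpu" if no GPU is available."""
    model.train()
    for images, targets in data_loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss = sum(model(images, targets).values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```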
Data Processing – Many aerial wildlife images are high-resolution. It may help to tile the images into smaller patches before running detection (to ensure the animals aren’t too small relative to the input size of the model).
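One tiling sketch, assuming the image is a NumPy array (the tile size and overlap are starting-point guesses; detections in the overlap regions should be deduplicated, e.g., with non-maximum suppression, after mapping back to full-image coordinates):

```python
def tile_image(img, tile=1024, overlap=128):
    """Split an H x W x C array into overlapping tiles.

    Yields (x_offset, y_offset, patch); the offsets let you map tile-level
    detections back into full-image coordinates.
    """
    h, w = img.shape[:2]
    step = tile - overlap
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            yield x, y, img[y:y + tile, x:x + tile]
```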
Computer Vision (non-DL) – In some cases, classical CV might work if animals contrast strongly with the background (e.g., white goats on brown ground); simple thresholding plus contour analysis can find such high-contrast blobs.
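A sketch of that idea with OpenCV; the threshold and area bounds are illustrative and highly scene-dependent:

```python
import cv2

def count_bright_blobs(image_path, thresh=200, min_area=50, max_area=5000):
    """Count high-contrast bright blobs (e.g., light animals on dark ground)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    # Morphological opening with the default 3x3 kernel removes speckle noise.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, None)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = [c for c in contours if min_area <= cv2.contourArea(c) <= max_area]
    return len(blobs)
```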
Ensembling – If time permits, an ensemble of methods can improve reliability (e.g., detect animals with an ML model, also run a simple edge-detection and shape analysis, and only count a detection when both agree).
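A toy agreement rule, as a sketch; treating “the classical detection’s center falls inside the ML box” as agreement is a deliberately loose simplification of ours:

```python
def consensus_detections(ml_boxes, cv_boxes):
    """Keep an ML detection only if some classical-CV detection overlaps it.

    Boxes are (x1, y1, x2, y2) in full-image pixel coordinates.
    """
    kept = []
    for mx1, my1, mx2, my2 in ml_boxes:
        for cx1, cy1, cx2, cy2 in cv_boxes:
            cx, cy = (cx1 + cx2) / 2, (cy1 + cy2) / 2
            if mx1 <= cx <= mx2 and my1 <= cy <= my2:
                kept.append((mx1, my1, mx2, my2))
                break
    return kept
```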
Hardware/Framework – Use whatever you’re comfortable with. If using TensorFlow, their Object Detection API has pre-trained models that could be fine-tuned. If using PyTorch, libraries like torchvision (with Faster R-CNN pretrained) or YOLOv5 scripts can be utilized. Ensure you have the means to train on the dataset (which might be a few hundred images – manageable in a day with transfer learning on a GPU).
Deliverables
Annotated Image Outputs – Provide example images where your system has marked the animals (or intruders): for instance, an output image with red boxes around each detected elephant in a herd, plus a label or count. This visual proof is the easiest way for judges to verify what your system is finding.
Counting Results – If the problem is presented as counting, provide the counts. This could be in a simple text or table form. E.g., “Image 10 – Elephants detected: 5”. If focusing on trends (say you had a time series of images), you could show how counts change. But since this is a one-day challenge, a per-image count listing is sufficient.
Code – The code used for detection and counting, with documentation. Include any training routine if you trained a model during the hackathon (or mention if you used a pre-trained model out-of-the-box). Ensure that the judges can run the detection on a sample image through your code (provide a sample input and instructions).
Model Files – If you trained a custom model, include the saved model weights (if file size allows) or provide a link to download it. This way judges can replicate your results. Clearly mention which framework and model architecture was used.
Brief Report – Describe your approach and any interesting findings. For example, “Our model sometimes confused large bush clusters as groups of animals – we addressed this by…”. Explain any assumptions (e.g., assuming a certain range of animal size in pixels). Also, if you did any error analysis, mention what types of errors occurred (missed animals in dense forest, etc.) and how you might tackle those with more time.
Future Use Considerations – One or two lines on how this could assist conservationists: e.g., “With this tool, park rangers can quickly survey drone footage each morning to count herds, replacing manual counting which is time-consuming. In the future, integrating GPS coordinates and tracking moving animals across frames could estimate animal paths and home ranges.” This helps reinforce the significance of your solution beyond just the hackathon demo.