Start Date: February 29 End Date: May 24 Competition URL: HuggingFace

Motivation

Building a robust system that identifies snake species from photographs is crucial for biodiversity monitoring and global health, given the toll of venomous snakebites: more than half a million deaths and disabilities every year. Knowing the global distribution of the 4,000+ snake species, and being able to tell them apart in images, improves both epidemiology and treatment outcomes. Machine predictions have already shown promising accuracy, even on long-tailed distributions covering around 1,800 species, but challenges persist in neglected regions. The next step is to test these systems in specific tropical and subtropical countries while accounting for the medical importance of each species, making machine predictions more reliable.

Snake species identification is challenging for both humans and machines. It is hindered by high intra-class and low inter-class variance, driven by factors such as location, color, sex, and age, and further complicated by visual similarity and mimicry between species. Incomplete knowledge of species distributions by country, and images originating from a limited set of locations, add complexity. Many snake species resemble species from other continents, so knowing the geographic origin of an image is important for accurate identification. Generalizing across all countries is vital, considering that no single location hosts more than 126 of the 4,000 snake species.

Task Description

The SnakeCLEF challenge aims to be a major benchmark for observation-based snake species identification. The task is to create a classification model that, for each snake observation (a set of images plus location metadata), returns a ranked list of predicted species, while minimizing the danger to human life and the waste of antivenom that would result if a bite from the pictured snake were treated according to the top-ranked prediction.

The classification model must fit within given limits on memory footprint and prediction time.
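
Because each observation may contain several images, the model must turn a set of per-image predictions into one ranked species list. A common baseline, not mandated by the organizers (the function and toy numbers below are purely illustrative), is to average the per-image class probabilities over the observation and rank species by the result:

```python
import numpy as np

def rank_species(per_image_probs: np.ndarray, species_names: list[str]) -> list[str]:
    """Aggregate per-image probabilities of shape (n_images, n_species)
    into a single ranked list of species for the observation."""
    obs_probs = per_image_probs.mean(axis=0)   # average over the observation's images
    order = np.argsort(obs_probs)[::-1]        # highest probability first
    return [species_names[i] for i in order]

# Example: 3 images of one observation, 4 candidate species (dummy numbers)
probs = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.5, 0.3, 0.1, 0.1],
                  [0.6, 0.2, 0.1, 0.1]])
print(rank_species(probs, ["sp_a", "sp_b", "sp_c", "sp_d"]))
```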

Evaluation protocol

As in last year's edition, we ask participants to provide a pipeline that predicts snake species on unseen images. Pipelines must be packaged as HuggingFace models and submitted through the competition space.

Participants can run any model or architecture, but it must fit within the memory footprint and prediction time limit (60 minutes) of the given HuggingFace server instance (Nvidia T4 small: 4 vCPUs, 15 GB RAM, 16 GB VRAM).
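
To sanity-check these limits before submitting, you may want to time a full inference pass locally and watch peak GPU memory. Below is a minimal sketch, assuming a CUDA device and a placeholder `predict_fn` standing in for your pipeline's inference call:

```python
import time
import torch

def check_budget(predict_fn, image_paths, time_limit_s=60 * 60, vram_limit_gb=16):
    """Rough local check that a pipeline fits the T4 time/VRAM budget."""
    torch.cuda.reset_peak_memory_stats()
    start = time.perf_counter()
    for path in image_paths:
        predict_fn(path)  # placeholder for your pipeline's inference call
    elapsed = time.perf_counter() - start
    peak_gb = torch.cuda.max_memory_allocated() / 1024**3
    print(f"elapsed: {elapsed:.1f}s (limit {time_limit_s}s), "
          f"peak VRAM: {peak_gb:.1f} GB (limit {vram_limit_gb} GB)")
    return elapsed < time_limit_s and peak_gb < vram_limit_gb
```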

This competition provides an evaluation ground for developing methods that extend beyond snake species recognition. We encourage participants to test bright new ideas rather than focus on finishing first on the leaderboard.

Sample submission
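
The exact submission interface is defined in the competition space. Purely as an illustration, a submitted pipeline might wrap a serialized model roughly like the sketch below; the class name, `predict` signature, and checkpoint path are hypothetical, not part of the official API:

```python
import torch
import torchvision.transforms as T
from PIL import Image

class SnakePipeline:
    """Hypothetical wrapper: loads a checkpoint and returns ranked species indices."""

    def __init__(self, checkpoint="model.pt", device="cuda"):
        self.device = device
        self.model = torch.load(checkpoint, map_location=device)  # assumed full serialized model
        self.model.eval()
        self.transform = T.Compose([
            T.Resize((224, 224)),
            T.ToTensor(),
            T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
        ])

    @torch.no_grad()
    def predict(self, image_path: str) -> list[int]:
        img = self.transform(Image.open(image_path).convert("RGB"))
        logits = self.model(img.unsqueeze(0).to(self.device))
        # Ranked list: most likely species index first
        return logits.squeeze(0).argsort(descending=True).tolist()
```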

Metrics

As in last year's edition, we will calculate four metrics.

The first two are the standard macro-averaged F1-score and Accuracy.

To motivate research in recognition scenarios with uneven costs for different errors, such as mistaking a venomous snake for a harmless one, we will again go beyond the 0-1 loss that is standard in classification. In addition to Accuracy and macro-averaged F1, we use two metrics (introduced last year) that consider venomous ←→ harmless confusion and different error costs.

Both "unusual metrics" are explained on the competition website. The code is provided on GitHub.

Context

This competition is held as part of the LifeCLEF lab.

To participate in the LifeCLEF lab, participants are required to register using this form (checking "Task 5 - SnakeCLEF" of LifeCLEF).

Only registered participants can submit a working-note paper to the peer-reviewed LifeCLEF proceedings (CEUR-WS) after the competition ends; see the Publication Track section below for details.

Publication Track

All registered participants are encouraged to submit a working-note paper to the peer-reviewed LifeCLEF proceedings (CEUR-WS) after the competition ends. This paper must provide sufficient information to reproduce the final submitted runs.

Only participants who submitted a working-note paper will be part of the officially published ranking used for scientific communication.

The campaign results appear in the working-notes proceedings published by CEUR Workshop Proceedings. Selected contributions will be invited for publication the following year in Springer's Lecture Notes in Computer Science (LNCS).

Timeline

Unless otherwise noted, all deadlines are at 11:59 PM CET on the corresponding day. The competition organizers reserve the right to update the contest timeline if they deem it necessary.

Organizers