Start Date: 13 March
End Date: 24 May
Competition URL: HuggingFace

Motivation

Automatic recognition of fungi species helps mycologists, citizen scientists, and nature enthusiasts identify wild species, and its availability supports the collection of valuable biodiversity data. In practice, species identification typically does not depend solely on the visual observation of the specimen but also on other information available to the observer, such as habitat, substrate, location, and time. Thanks to rich metadata, precise annotations, and baselines available to all competitors, the challenge provides a benchmark for image recognition with the use of additional information. Moreover, the toxicity of a mushroom can be crucial for a mushroom picker's decision, so within the competition we will explore the decision process beyond the commonly assumed 0/1 cost function.

Task Description

Given a set of real fungi observations and the corresponding metadata, the goal of the task is to create a classification model that returns a ranked list of predicted species for each observation (multiple photographs of the same individual plus its geographical location).
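
To make the observation-level output concrete, below is a minimal sketch of how per-photograph probabilities could be aggregated into a ranked species list. The names `model`, `preprocess`, and `species_names` are hypothetical placeholders, and the simple probability averaging is an illustration, not the official baseline.

```python
import numpy as np

def rank_species_for_observation(image_paths, location, model, preprocess,
                                 species_names, top_k=10):
    """Aggregate per-image predictions for one observation into a ranked species list."""
    per_image_probs = []
    for path in image_paths:
        x = preprocess(path)               # e.g. load, resize, and normalize one photograph
        per_image_probs.append(model(x))   # hypothetical model returning class probabilities
    mean_probs = np.mean(per_image_probs, axis=0)
    # Metadata such as `location` could be fused here (e.g. as a geographic prior);
    # in this sketch it is accepted but not used.
    ranked = np.argsort(mean_probs)[::-1][:top_k]
    return [(species_names[i], float(mean_probs[i])) for i in ranked]
```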

The classification model must fit within the memory footprint and prediction time limits (120 minutes) of the given HuggingFace server instance (Nvidia T4 small: 4 vCPUs, 15 GB RAM, 16 GB VRAM).

Note: Since the test set contains multiple out-of-scope classes, the solution has to handle such classes.
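
One common way to handle out-of-scope observations is to reject low-confidence predictions. The sketch below uses a softmax-probability threshold; both the threshold value and the `UNKNOWN_CLASS_ID` placeholder are illustrative assumptions, not part of the official protocol.

```python
import numpy as np

UNKNOWN_CLASS_ID = -1          # placeholder id for out-of-scope observations
CONFIDENCE_THRESHOLD = 0.3     # hypothetical value; tune on validation data

def predict_with_rejection(mean_probs: np.ndarray) -> int:
    """Return the top class id, or UNKNOWN_CLASS_ID if confidence is too low."""
    top_class = int(np.argmax(mean_probs))
    if mean_probs[top_class] < CONFIDENCE_THRESHOLD:
        return UNKNOWN_CLASS_ID
    return top_class
```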

Evaluation protocol

As in last year's edition, we ask participants to provide a pipeline that predicts fungi species on unseen images. Participants must submit their pipelines as HuggingFace models through the competition space.

Participants can run any model or architecture, but it must fit within the memory footprint and prediction time limits (120 minutes) of the given HuggingFace server instance (Nvidia T4 small: 4 vCPUs, 15 GB RAM, 16 GB VRAM).
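
As a practical aid, the sketch below shows one way to check whether a pipeline is likely to fit the 120-minute budget by timing a small sample of observations and extrapolating; `predict_fn` and `observations` are hypothetical placeholders, and the sample size is arbitrary.

```python
import time

def estimate_total_runtime(predict_fn, observations, sample_size=50, budget_minutes=120):
    """Time a small sample of observations and extrapolate to the full test set."""
    sample = observations[:sample_size]
    start = time.perf_counter()
    for obs in sample:
        predict_fn(obs)
    per_obs = (time.perf_counter() - start) / len(sample)
    estimated_minutes = per_obs * len(observations) / 60
    print(f"~{per_obs:.2f} s/observation, estimated total: {estimated_minutes:.1f} min "
          f"(budget: {budget_minutes} min)")
    return estimated_minutes <= budget_minutes
```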

This competition provides an evaluation ground for developing methods suitable for more than just fungi species recognition. We want you to evaluate bright new ideas rather than focus on finishing first on the leaderboard.

Sample submission
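
The exact entry point and signature expected by the competition space are not reproduced here; the following is only a hypothetical sketch of what a submission-style classifier could look like, with the class name, checkpoint path, backbone choice, and preprocessing all being illustrative assumptions.

```python
from PIL import Image
import torch
import timm                           # example backbone library; any image model works
from torchvision import transforms

class FungiClassifier:
    """Hypothetical wrapper around a fine-tuned image classifier."""

    def __init__(self, checkpoint_path: str, num_classes: int):
        # `checkpoint_path` is a placeholder for a fine-tuned checkpoint.
        self.model = timm.create_model("convnext_base", pretrained=False,
                                       num_classes=num_classes)
        self.model.load_state_dict(torch.load(checkpoint_path, map_location="cpu"))
        self.model.eval()
        # Deliberately simple preprocessing; a real pipeline would match the
        # normalization used during training.
        self.transform = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])

    @torch.no_grad()
    def predict(self, image: Image.Image) -> torch.Tensor:
        """Return class probabilities for a single photograph."""
        x = self.transform(image.convert("RGB")).unsqueeze(0)
        return self.model(x).softmax(dim=-1).squeeze(0)
```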

Metrics

As last year, we will calculate three custom metrics in addition to the macro-averaged F1-score and accuracy.

All "unusual metrics" are explained on the competition website. The code is provided on GitHub.

Context

This competition is held jointly as part of:

To participate in the LifeCLEF lab, participants are required to register using this form and check "Task 2 - FungiCLEF" of LifeCLEF.

Only registered participants can submit a working-note paper to peer-reviewed LifeCLEF proceedings (CEUR-WS) after the competition ends.


Publication Track

All registered participants are encouraged to submit a working-note paper to peer-reviewed LifeCLEF proceedings (CEUR-WS) after the competition ends. This paper must provide sufficient information to reproduce the final submitted runs.

Only participants who submitted a working-note paper will be part of the officially published ranking used for scientific communication.

The campaign results appear in the working-note proceedings published by CEUR Workshop Proceedings. Selected contributions will be invited for publication the following year in Springer's Lecture Notes in Computer Science (LNCS).

Timeline

Unless otherwise noted, all deadlines are at 11:59 PM CET on the corresponding day. The competition organizers reserve the right to update the contest timeline if they deem it necessary.

Organizers