Evaluation

We use multiple metrics to measure systems' behaviour, arranged into two main categories: performance and efficiency.

Performance metrics 

Performance metrics are intended to measure how well systems achieve the proposed task in terms of prediction quality. We differentiate two types of performance measures: classification performance (binary and multiclass classification) and latency-based measures (which reward systems that detect earlier).

Efficiency metrics 

Efficiency metrics are intended to measure the impact of the system in terms of the resources needed and its environmental footprint. We want to recognize those systems that are able to perform the task with minimal demand for resources. This will allow us to, for instance, identify technologies that could run on a mobile device or a personal computer, along with those with the lowest carbon footprint. To this end, each submission (each prediction sent to the server) must contain the emissions information shown in the "emissions" block of the POST examples below.

A notebook with sample code to collect this information is here. Note that the use of the library has changed with respect to editions prior to 2023.
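As a reference, here is a minimal sketch of collecting these fields, assuming the tracking library is CodeCarbon (the field names match the "emissions" block in the POST examples below); defer to the notebook provided if the APIs differ:

from codecarbon import EmissionsTracker  # assumption: the library is CodeCarbon

tracker = EmissionsTracker()
tracker.start()
# ... run the prediction pipeline for the current round here ...
tracker.stop()

# After stop(), the tracker exposes the measured values as a dataclass;
# the keys below match the "emissions" block expected by the server.
data = tracker.final_emissions_data
emissions_payload = {
    "duration": data.duration,
    "emissions": data.emissions,
    "cpu_energy": data.cpu_energy,
    "gpu_energy": data.gpu_energy,
    "ram_energy": data.ram_energy,
    "energy_consumed": data.energy_consumed,
    "cpu_count": data.cpu_count,
    "gpu_count": data.gpu_count,
    "cpu_model": data.cpu_model,
    "gpu_model": data.gpu_model,
    "ram_total_size": data.ram_total_size,
    "country_iso_code": data.country_iso_code,
}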

Server

Each team will be provided with a token identifier when registering for the task, to be used throughout the evaluation campaign. With this token, teams will be able to download the trial, training, and test datasets, as well as send their predictions.

Teams can download the trial and training data with the token; the test data, however, is provided iteratively by the server. The team has to connect to our server using its token, and the server will provide user writings round by round: each request is answered with one new writing per user. The server only provides the writings for the current round, so the team should keep a record of all users and their writings across rounds. After each request, the team has to send back to the server its prediction for each user.


NOTE: The server will be opened according to the dates indicated so that teams can simulate the evaluation phase with the trial data, even though the trial data can also be downloaded directly.

Download Trial Data and Train Data

Trial and training datasets will be available according to the dates indicated. To download them, teams must access the server at the address given, using the token identifier provided, and extract the data for the tasks in which they are signed up.

Remember that the trial and training data are different, so it is recommended to merge both sets to obtain a larger amount of training data.

Download Test Data (new)

After the evaluation, the test dataset and the test gold labels are available in the same way as the trial and training data. To extract them, teams must access the server at the address given, using the token identifier provided, and download the data for the tasks in which they are signed up.

GET Test Data (and trial data)

For the first GET request, the server outputs the first message of each user. To send a GET request:
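A minimal sketch using Python's requests library; the base URL and the "getmessages" route are placeholders (use the address and routes given to your team):

import requests

BASE_URL = "https://example.org"   # placeholder: the server address you were given
TOKEN = "your-team-token"          # token identifier provided at registration

# Hypothetical route name; substitute the GET route indicated for your task.
response = requests.get(f"{BASE_URL}/task1/getmessages/{TOKEN}")
messages = response.json()         # list of user writings for the current round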

The output format is the following:

[
    {
        "id_message": 123,
        "round": 1,
        "nick": "subject1",
        "message": "...",
        "date": "..."
    },
    {
        "id_message": 134,
        "round": 1,
        "nick": "subject10",
        "message": "...",
        "date": "..."
    },
    ...
]

Attributes:

- id_message: identifier of the writing.
- round: round in which the writing is released.
- nick: anonymized identifier of the user (e.g. "subject1").
- message: text of the user's writing.
- date: date of the writing.

The first round contains all users in the collection (because every user has at least one message). However, after a few rounds, some users will disappear from the server's response: for example, a user with 10 messages will only appear in the first 10 rounds. Furthermore, the server does not inform teams when a given writing is the last one in a user's thread. Teams detect the last round when they receive an empty list from the server.

After each request, the team has to run its own prediction pipeline and send back to the server its prediction for each individual, as in the sketch below.
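A minimal sketch of the full round loop, reusing the placeholder endpoint from the previous example. Because the server only returns the current round, the team accumulates each user's writings itself and stops when it receives an empty list:

import requests
from collections import defaultdict

BASE_URL = "https://example.org"   # placeholder: the server address you were given
TOKEN = "your-team-token"

history = defaultdict(list)        # nick -> all writings seen so far
while True:
    # Hypothetical GET route; substitute the one indicated for your task.
    round_messages = requests.get(f"{BASE_URL}/task1/getmessages/{TOKEN}").json()
    if not round_messages:         # empty list: no more rounds
        break
    for item in round_messages:
        history[item["nick"]].append(item["message"])
    # ... run the prediction pipeline on `history` and POST the predictions
    # for each run (see below) before requesting the next round ...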

POST predictions of Test Data (and trial data)

Each team has a limit of three runs for each subtask in which it participates. To submit predictions, the team sends a POST request. A team that chooses not to use all three runs can skip a run by not sending the corresponding POST request and instead just making a GET request to proceed to the next round. To send a POST request:
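A minimal sketch of submitting one run's predictions with the requests library. The route follows the submit URL pattern shown in IMPORTANT NOTE 2 below, with run numbers 0 to 2; emissions_payload is the dictionary built with the tracking library earlier, and the assumption that the server accepts a JSON body should be checked against the client notebook:

import requests

BASE_URL = "https://example.org"   # placeholder
TOKEN = "your-team-token"
RUN = 0                            # run number: 0, 1, or 2

payload = [{
    "predictions": {"subject1": "none", "subject10": "depression"},
    "emissions": emissions_payload,   # collected as shown earlier
}]
# Assumption: the prediction message is sent as the JSON body of the POST.
requests.post(f"{BASE_URL}/task1/submit/{TOKEN}/{RUN}", json=payload)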

For each subtask, the prediction message to be sent has a different format. The three examples below show, respectively: predictions with string labels plus emissions information; predictions plus contexts plus emissions; and binary (0/1) predictions plus emissions.

[
    {
        "predictions": {
            "subject1": "none",
            "subject10": "depression",
            ...
        },
        "emissions": {
            "duration": 0.01,
            "emissions": 3.67552e-08,
            "cpu_energy": 8.120029e-08,
            "gpu_energy": 0,
            "ram_energy": 5.1587e-12,
            "energy_consumed": 8.1205e-08,
            "cpu_count": 1,
            "gpu_count": 1,
            "cpu_model": "Intel(R) Xeon(R) CPU @ 2.20GHz",
            "gpu_model": "1 x Tesla T4",
            "ram_total_size": 12.681198120117188,
            "country_iso_code": "USA"
        }
    }
]

[
    {
        "predictions": {
            "subject1": "none",
            "subject10": "depression",
            ...
        },
        "contexts": {
            "subject1": "none",
            "subject10": "social#work#emergency",
            ...
        },
        "emissions": {
            "duration": 0.01,
            "emissions": 3.67552e-08,
            "cpu_energy": 8.120029e-08,
            "gpu_energy": 0,
            "ram_energy": 5.1587e-12,
            "energy_consumed": 8.1205e-08,
            "cpu_count": 1,
            "gpu_count": 1,
            "cpu_model": "Intel(R) Xeon(R) CPU @ 2.20GHz",
            "gpu_model": "1 x Tesla T4",
            "ram_total_size": 12.681198120117188,
            "country_iso_code": "USA"
        }
    }
]

[
    {
        "predictions": {
            "subject1": 0,
            "subject10": 1,
            ...
        },
        "emissions": {
            "duration": 0.01,
            "emissions": 3.67552e-08,
            "cpu_energy": 8.120029e-08,
            "gpu_energy": 0,
            "ram_energy": 5.1587e-12,
            "energy_consumed": 8.1205e-08,
            "cpu_count": 1,
            "gpu_count": 1,
            "cpu_model": "Intel(R) Xeon(R) CPU @ 2.20GHz",
            "gpu_model": "1 x Tesla T4",
            "ram_total_size": 12.681198120117188,
            "country_iso_code": "USA"
        }
    }
]

To facilitate participation, we have prepared an example client application that communicates with the server. The notebook is here.

IMPORTANT NOTE 1: 

We have provided a repository where you can find the evaluation script for 2023 and the rest of the scripts related to the competition.

IMPORTANT NOTE 2 (NEW): 

This year the submission procedure is new. Each time a team makes a GET request to the server, it is provided with the data for the round it is in. For the data to be updated to the next round, the team has to make all of its POST requests, sending its predictions (or empty predictions if it does not want to use all 3 runs).

Moreover, if a team participates in both task 1 and task 2, which use the same dataset, the team must send six POST requests to update the round (see the sketch after the list). Example:

URL/task1/submit/{token}/{0}
URL/task1/submit/{token}/{1}
URL/task1/submit/{token}/{2}
URL/task2/submit/{token}/{0}
URL/task2/submit/{token}/{1}
URL/task2/submit/{token}/{2}
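A minimal sketch of sending these six requests, under the same placeholder assumptions as before; the empty-prediction body for unused runs is an assumption and should be checked against the client notebook:

import requests

BASE_URL = "https://example.org"   # placeholder
TOKEN = "your-team-token"

predictions_per_run = {}  # fill with (task, run) -> prediction message

for task in ("task1", "task2"):
    for run in range(3):
        # Unused runs get an empty predictions block (assumption).
        body = predictions_per_run.get(
            (task, run),
            [{"predictions": {}, "emissions": {}}],
        )
        requests.post(f"{BASE_URL}/{task}/submit/{TOKEN}/{run}", json=body)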

Each team must submit predictions for each round using POST requests to make sure its runs are considered valid.