Performance metrics are intended to measure how well systems achieve the proposed task in terms of prediction quality. We differentiate two types of performance measures: classification performance (binary and multiclass classification) and latency-based measures (to reward systems that detect earlier).
Task 1: standard classification metrics (precision, recall, micro- and macro-F1) and early-detection metrics adapted to conversational settings (e.g., ERDE or turn-based latency measures).
Task 2: Cohen's kappa, to measure agreement between the expert and the participant's system in this binary setting, and accuracy.
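For illustration only (this is not an official evaluation script), the classification and agreement metrics above could be computed with scikit-learn as in the following sketch; the gold labels and predictions are hypothetical placeholders:

# Illustrative sketch: computing the classification metrics with scikit-learn.
# The gold labels and predictions below are hypothetical placeholders.
from sklearn.metrics import (precision_recall_fscore_support, f1_score,
                             cohen_kappa_score, accuracy_score)

gold = [0, 1, 1, 0, 1]   # hypothetical gold labels (0 = low risk, 1 = high risk)
pred = [0, 1, 0, 0, 1]   # hypothetical system predictions

precision, recall, macro_f1, _ = precision_recall_fscore_support(
    gold, pred, average="macro", zero_division=0)
micro_f1 = f1_score(gold, pred, average="micro")

# Task 2: agreement with the expert and accuracy
kappa = cohen_kappa_score(gold, pred)
acc = accuracy_score(gold, pred)

print(f"P={precision:.3f} R={recall:.3f} macro-F1={macro_f1:.3f} micro-F1={micro_f1:.3f}")
print(f"kappa={kappa:.3f} accuracy={acc:.3f}")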
Efficiency metrics are intended to measure the impact of the system in terms of the resources needed and its environmental footprint. We want to recognize systems that can perform the task with minimal demand for resources. This will allow us, for instance, to identify technologies that could run on a mobile device or a personal computer, along with those with the lowest carbon footprint. To this end, each submission (each prediction sent to the server) must contain the following information:
Total RAM needed
Total % of CPU usage
Floating Point Operations per Second (FLOPS)
Total time to process (in milliseconds)
CO2 emissions (in kg). For this, the CodeCarbon tool will be used.
A notebook with sample code to collect this information is here.
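As a rough illustration, the following sketch shows how this information could be gathered, assuming the codecarbon and psutil packages; run_prediction() is a hypothetical placeholder for a team's own pipeline:

# Minimal sketch (not the official notebook): collecting efficiency metrics with
# codecarbon and psutil. run_prediction() is a hypothetical placeholder for the
# team's own prediction pipeline; FLOPS would be reported from the team's own
# framework profiler and is not measured here.
import time
import psutil
from codecarbon import EmissionsTracker

psutil.cpu_percent(interval=None)          # prime the CPU meter
tracker = EmissionsTracker()               # writes emissions.csv by default
tracker.start()
start = time.time()

run_prediction()                           # hypothetical: the team's own pipeline

elapsed_ms = (time.time() - start) * 1000  # total processing time in milliseconds
emissions_kg = tracker.stop()              # kg of CO2-equivalent emissions
ram_bytes = psutil.Process().memory_info().rss    # RAM used by this process (bytes)
cpu_percent = psutil.cpu_percent(interval=None)   # average CPU % since priming

print(elapsed_ms, emissions_kg, ram_bytes, cpu_percent)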
Each team will be provided with a token identifier when registering for the task to be used throughout this evaluation campaign. With this token, teams will be able to extract trial, training, and test datasets as well as send their predictions.
Teams can download the trial and training data with the token; however, the test data will be provided by the server. The team has to connect to our server using its token, and the server will iteratively provide user writings in rounds: each request is followed by a server response containing one new writing per user. The server only provides the writings for the current round, so the team should keep a record of all users and their writings from every round. After each request, the team has to send back to the server its prediction for each user.
NOTE: The server will be open on the dates indicated so that teams can simulate the evaluation phase with the trial data, even though the trial data can also be downloaded directly.
The datasets will be available according to the dates indicated. To obtain them, teams must access the server at the address given, using the token identifier provided, and extract the data corresponding to the tasks they are signed up for.
Trial data:
For Task1: URL/task1/download_trial/{token}
For Task2: URL/task2/download_trial/{token}
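For example, the Task 1 trial data could be fetched with the requests library as in this sketch; URL and TOKEN are placeholders for the server address and the team token, and the output file name is an assumption:

# Sketch: downloading the Task 1 trial data with the requests library.
# URL and TOKEN are placeholders; the output file name is an assumption.
import requests

URL = "https://example.org"     # placeholder server address
TOKEN = "your-team-token"       # placeholder team token

response = requests.get(f"{URL}/task1/download_trial/{TOKEN}")
response.raise_for_status()
with open("trial_task1.data", "wb") as f:
    f.write(response.content)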
For the first GET request, the server outputs the first message of each user. To send a GET request:
Trial server: URL/{task}/getmessages_trial/{token}
Test server: URL/{task}/getmessages/{token}
Replace {task} with one of these strings: "task1" or "task2".
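As a sketch (reusing the URL and TOKEN placeholders above), a round of writings could be requested like this; for the test server, replace "getmessages_trial" with "getmessages":

# Sketch: requesting the current round of writings from the trial server.
# URL and TOKEN are placeholders for the server address and the team token.
import requests

URL = "https://example.org"
TOKEN = "your-team-token"

messages = requests.get(f"{URL}/task1/getmessages_trial/{TOKEN}").json()
print(len(messages), "writings received in this round")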
The output format is the following:
[
{
"id_message": 123,
"round": 1,
"role": "psychologist",
"message": "...",
"date": "...",
},
{
"id_message": 134,
"round": 1,
"role": "user",
"message": "...",
"date": "...",
},
...
]
Attributes:
id_message: internal identifier of the writing
round: number of the round (starting at 1; the total number of rounds is not known in advance)
role: "psychologist" or "user"
message: the text of the writing
date: date of the writing, in 'YYYY-MM-DD HH:MM:SS' format
The first round contains all users in the collection (because every user has at least one message). However, after a few rounds, some users will disappear from the server's response; for example, a user with 10 messages will only appear in the first 10 rounds. Furthermore, the server does not tell teams that a given writing is the last one in a user's thread. Teams detect the last round when they receive an empty list from the server.
After each request, the team has to run its own prediction pipeline and give back to the server its prediction about each individual.
Each team has a limit of three runs for each subtask in which it participates. To submit predictions, the team sends a POST request. A team that chooses not to use all three runs can skip a run by not sending a POST request for it and simply making the next GET request to proceed to the next round. To send a POST request:
Trial server: URL/{task}/submit_trial/{token}/{run}
Test server: URL/{task}/submit/{token}/{run}
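Putting the GET and POST requests together, a simplified client loop could look like the sketch below; build_payload() is a hypothetical placeholder for team-specific code that produces the body formats described next, and URL and TOKEN are placeholders as above:

# Simplified client loop (sketch only): iterate over rounds until the server
# returns an empty list, predicting and submitting after each GET.
# build_payload() is a hypothetical placeholder for the team's own code.
import requests

URL = "https://example.org"
TOKEN = "your-team-token"
TASK = "task1"

while True:
    messages = requests.get(f"{URL}/{TASK}/getmessages/{TOKEN}").json()
    if not messages:                     # empty list: the last round has been reached
        break
    for run in range(3):                 # up to three runs per subtask
        payload = build_payload(messages, run)        # hypothetical team-specific code
        requests.post(f"{URL}/{TASK}/submit/{TOKEN}/{run}", json=payload)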
For each subtask, the prediction file to be sent has a different format.
For the task associated with binary classification (task1), predictions will be 0 for “low risk” or 1 for “high risk”. The structure is as follows:
[
{
"predictions":
{
"subject1": 0,
"subject10": 1,
...
},
"emissions":
{
"duration": 0.01,
"emissions": 3.67552e-08,
"cpu_energy": 8.120029e-08,
"gpu_energy": 0,
"ram_energy": 5.1587e-12,
"energy_consumed": 8.1205e-08,
"cpu_count": 1,
"gpu_count": 1,
"cpu_model": "Intel(R) Xeon(R) CPU @ 2.20GHz",
"gpu_model": "1 x Tesla T4",
"ram_total_size": 12.681198120117188,
"country_iso_code": "USA"
}
}
]
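For illustration, this body could be assembled from a dictionary of per-subject predictions and the last row of CodeCarbon's default emissions.csv output; the column names below assume CodeCarbon's standard schema, and URL and TOKEN are the same placeholders as above:

# Sketch: building the Task 1 submission body. Predictions are placeholders;
# the emissions fields are read from CodeCarbon's default emissions.csv and
# assume its standard column names.
import pandas as pd
import requests

URL = "https://example.org"
TOKEN = "your-team-token"

predictions = {"subject1": 0, "subject10": 1}     # hypothetical predictions

fields = ["duration", "emissions", "cpu_energy", "gpu_energy", "ram_energy",
          "energy_consumed", "cpu_count", "gpu_count", "cpu_model",
          "gpu_model", "ram_total_size", "country_iso_code"]
last = pd.read_csv("emissions.csv").iloc[-1]
emissions = {k: (last[k].item() if hasattr(last[k], "item") else last[k])
             for k in fields}                     # convert numpy scalars for JSON

payload = [{"predictions": predictions, "emissions": emissions}]
requests.post(f"{URL}/task1/submit/{TOKEN}/0", json=payload)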
For the task associated with multiclass classification (task2), predictions will be 0 for “low risk” or 1 for “high risk”. In addition, the type of gambling must be predicted: "betting", "onlinegaming", "trading", or "lootboxes". Regardless of whether the subject is predicted as low or high risk, it is always associated with a type of gambling problem. The type-of-gambling prediction considered for evaluation will be the one received in the last round. The structure is as follows:
[
{
"predictions":
{
"subject1": 0,
"subject10": 1,
...
},
"type":
{
"subject1": "lootboxes",
"subject10": "onlinegaming",
...
},
"emissions":
{
"duration": 0.01,
"emissions": 3.67552e-08,
"cpu_energy": 8.120029e-08,
"gpu_energy": 0,
"ram_energy": 5.1587e-12,
"energy_consumed": 8.1205e-08,
"cpu_count": 1,
"gpu_count": 1,
"cpu_model": "Intel(R) Xeon(R) CPU @ 2.20GHz",
"gpu_model": "1 x Tesla T4",
"ram_total_size": 12.681198120117188,
"country_iso_code": "USA"
}
}
]
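Continuing the Task 1 sketch above, the Task 2 body simply adds a "type" dictionary mapping each subject to its predicted type of gambling; all values shown are placeholders:

# Sketch (continuing the Task 1 example above): the Task 2 body adds a "type"
# dictionary. Predictions and types are placeholder values; "emissions" is the
# dictionary built in the previous sketch.
payload = [{
    "predictions": {"subject1": 0, "subject10": 1},
    "type": {"subject1": "lootboxes", "subject10": "onlinegaming"},
    "emissions": emissions,
}]
requests.post(f"{URL}/task2/submit/{TOKEN}/0", json=payload)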
To facilitate participation, we have prepared an example of a client application that communicates with the server. The notebook is here.
NOTE 1:
We have provided a repository where you can find the evaluation script for 2023 and the rest of the scripts related to the competition.
NOTE 2:
Each time a team makes a GET request to the server, it receives the data for the round it is currently in. For the data to advance to the next round, the team has to make all its POST requests, sending its predictions (or empty predictions if it does not want to use all 3 runs).
Moreover, if a team participates in both Task 1 and Task 2, since the same dataset is used for both, the team must send six POST requests (three per task) to advance the round. Example:
GET request: URL/task1/getmessages/{token} # the team gets round 1 data
POST request:
URL/task1/submit/{token}/0
URL/task1/submit/{token}/1
URL/task1/submit/{token}/2
GET request: URL/task2/getmessages/{token} # the team gets the same round 1 data as before because it participates in Tasks 1 and 2 and has not yet submitted all six prediction requests
POST request:
URL/task2/submit/{token}/0
URL/task2/submit/{token}/1
URL/task2/submit/{token}/2
GET request: URL/task1/getmessages/{token} # the team gets round 2 data
The team must submit predictions for each round using the POST requests to ensure its runs are considered valid.
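As a final sketch, the complete per-round flow for a team registered in both tasks could look like this; URL, TOKEN, and build_payload() are the same placeholders as in the sketches above, and the round only advances once all six POSTs have been sent:

# Sketch: per-round flow for a team in both Task 1 and Task 2. The server only
# advances to the next round after all six POSTs (three runs per task).
import requests

for task in ("task1", "task2"):
    messages = requests.get(f"{URL}/{task}/getmessages/{TOKEN}").json()
    for run in range(3):
        payload = build_payload(task, messages, run)   # hypothetical team code
        requests.post(f"{URL}/{task}/submit/{TOKEN}/{run}", json=payload)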