Submission Instructions
Code and Results Submission
Important Dates:
Test set release: 12th June
Final code and results submission: 23rd June
Participants should submit their code and results via email (to errathri@gmail.com).
If the attachments are too large, participants can upload a zip archive to cloud storage and share the link instead. However, the last edit must be made by the final deadline (23rd of June).
We will release the test set without labels on the 12th of June; participants will then have ten days to refine their models and submit their code and results (deadline: 23rd June).
Paper Submission
Deadline: 14th July via EasyChair (link will be added soon)
Submission Materials
The test dataset will be made available to researchers. By the submission deadline, participants will submit their model predictions for each task (RM, UA, IR) in which they decided to participate.
Each participant may submit up to three models/predictions per task.
Submissions must be fully reproducible: given the models, the evaluation team should be able to obtain the same predictions from the test dataset. As such, the submission materials for each task are:
y_pred (1-dimensional array of predictions on the test dataset)
script used to extract the groundtruth arrays (i.e., a function which takes as input the path to one or more session label CSV files and outputs a 1-dimensional array of the groundtruth labels of the data)
must be well documented so that its use is intuitive
must include information about prediction frequency
model and model weights
seed used to obtain predictions
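As an illustration of the expected groundtruth extraction script, here is a minimal sketch. The column name "label" and the CSV layout are assumptions; adapt them to the actual session label files, and document the prediction frequency alongside the function.

```python
import csv

def extract_groundtruth(label_csv_paths):
    """Read one or more session label CSV files and return a flat
    1-dimensional list of groundtruth labels, in session order.

    Assumptions (hypothetical, adjust to the real file format):
    - each CSV has a header row containing a 'label' column;
    - rows appear at the task's prediction frequency, one label per row.
    """
    labels = []
    for path in label_csv_paths:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                labels.append(row["label"])
    return labels
```

The evaluation team would call this with the paths to the test-set session files to rebuild the groundtruth array that y_pred is scored against.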
Evaluation
For each task, the submitted models will be evaluated on two tracks: overall performance and time-tolerant performance.
Overall performance
We will rank models based on the combined rankings of accuracy and F1-score.
Example: models are ranked by accuracy and given points based on their position (1, 2, 3, ...). The same process is applied to F1-score. The best model is the one whose combined number of points is lowest (min = 2 points).
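The ranking procedure above can be sketched as follows. This is an illustrative implementation only (tie-breaking between equal metric values is not specified in the instructions and is left to Python's stable sort here):

```python
def combined_ranking(models):
    """models: dict mapping model name -> (accuracy, f1_score).

    Rank models separately on each metric (position 1 = best),
    sum the two positions, and return models sorted so that the
    lowest combined score (best possible = 2) comes first.
    """
    def positions(metric_idx):
        ordered = sorted(models, key=lambda m: models[m][metric_idx],
                         reverse=True)
        return {m: pos + 1 for pos, m in enumerate(ordered)}

    acc_pos = positions(0)   # points from the accuracy ranking
    f1_pos = positions(1)    # points from the F1-score ranking
    totals = {m: acc_pos[m] + f1_pos[m] for m in models}
    return sorted(totals.items(), key=lambda kv: kv[1])
```

For instance, a model ranked 1st on accuracy and 2nd on F1-score scores 3 points and beats a model ranked 2nd and 3rd (5 points).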
Time-tolerant performance
We will rank models based on the combined rankings of time-tolerant accuracy and F1-score.