Submission Instructions

Code and Results Submission

Important Dates:

Release of the test set (without labels): 12th of June
Deadline for code and results submission: 23rd of June
Deadline for paper submission: 14th of July

Participants should submit their code and results via email (to errathri@gmail.com).

If the attachments are too large, participants can upload a zip archive to cloud storage and share the link. However, the last edit must be made by the final deadline (23rd of June).

We will release the test set without labels on the 12th of June, and participants will have ten days to refine their models and submit their code and results (deadline: 23rd of June).


Paper Submission

Deadline: 14th of July via EasyChair (link will be added soon)

Submission Materials

The test dataset will be made available to researchers. By the submission deadline, participants will submit their models' predictions, up to three for each task (RM, UA, IR) in which they decide to participate.

Each participant can submit up to three models/predictions per task.

Submissions must be fully reproducible; that is, given the models, the evaluation team should be able to obtain the same predictions from the test dataset. As such, the submission materials for each task are:


Evaluation

The submitted models, for each task, will be evaluated on two tracks: overall performance and time-tolerant performance.

Overall performance

We will rank models based on the combined rankings of accuracy and F1-score.

Example: models are ranked by accuracy and given points according to their position (1, 2, 3, ...). The same process is applied to the F1-score. The best model is the one whose combined number of points is lowest (minimum = 2 points).
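
To illustrate the combined-ranking scheme, here is a minimal Python sketch. It is not the official evaluation script; the model names and scores are hypothetical, and ties are broken arbitrarily by sort order.

```python
def combined_ranking(scores):
    """scores maps model name -> (accuracy, f1).
    Returns (model, points) pairs sorted by combined rank; lower is better."""
    # Rank models by accuracy (position 1 = highest accuracy).
    by_accuracy = sorted(scores, key=lambda m: scores[m][0], reverse=True)
    accuracy_rank = {m: i + 1 for i, m in enumerate(by_accuracy)}

    # Rank models by F1-score in the same way.
    by_f1 = sorted(scores, key=lambda m: scores[m][1], reverse=True)
    f1_rank = {m: i + 1 for i, m in enumerate(by_f1)}

    # Combine: points = accuracy position + F1 position (minimum possible = 2).
    points = {m: accuracy_rank[m] + f1_rank[m] for m in scores}
    return sorted(points.items(), key=lambda item: item[1])


if __name__ == "__main__":
    # Hypothetical submissions: model name -> (accuracy, F1-score)
    example = {
        "model_A": (0.84, 0.80),
        "model_B": (0.81, 0.74),
        "model_C": (0.79, 0.78),
    }
    for model, total in combined_ranking(example):
        print(model, total)
```

In this hypothetical example, model_A is first on both metrics, reaches the minimum of 2 points, and is therefore ranked best. The time-tolerant track below applies the same scheme using the time-tolerant metrics.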


Time-tolerant performance

We will rank models based on the combined rankings of time-tolerant accuracy and F1-score.