Submission to either task can be done using the form at the bottom of the page.
The form allows you to submit either a primary run or an additional run for one of the tasks. To submit multiple runs, or runs for multiple tasks, fill in and send the form once per run. All additional runs will be considered.
You may re-submit your primary run, but note that only the LAST SUBMITTED primary run will be used to rank your system.
The form will ask you to choose a nickname for your team and a nickname for your run. Please keep the team nickname identical if you submit multiple runs and/or runs for multiple tasks.
We will also ask you for a brief (3-4 lines max) description of the system for each run.
For both tasks, you can submit your runs in one of two ways:
DIRECT SUBMISSION: upload via the form a JSONL file corresponding to the test set of the task you are submitting for (a minimal JSONL sketch follows this list), complete with:
For Task 1, the explanation field (str) for each latent
For Task 2, the activating field (bool) for each example of each latent
HUGGINGFACE DATASET SUBMISSION: provide the name of a HuggingFace dataset that includes at least the `test` split of the dataset for the task you are submitting for, complete with:
For Task 1, the explanation field (str) for each latent
For Task 2, the activating field (bool) for each example of each latent
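For illustration only, here is a minimal sketch of what a direct-submission JSONL could look like for Task 1. Only the explanation field is specified above; the latent key, the record layout, and the file name are placeholders, so keep whatever structure the released test set already uses and only fill in the empty/default fields.

```python
import json

# Hypothetical Task 1 records: only "explanation" (str) is required by the
# task; "latent" and the overall layout are placeholders -- mirror the
# structure of the released test set.
records = [
    {"latent": 0, "explanation": "fires on mentions of Italian cities"},
    {"latent": 1, "explanation": "fires on past-tense verb forms"},
]
# For Task 2 the filled-in field would instead be "activating" (bool),
# one value per example per latent.

with open("task1_primary.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```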
If you choose to submit a HuggingFace dataset, the workflow is (a sketch follows this list):
Download the dataset
Edit the empty/default fields of the test set (see the Data page) using your system
Push the dataset to the Hub under a name such as yourname/EXPLAINITA-task1-primary (e.g. for your primary run for Task 1)
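A minimal sketch of these three steps with the `datasets` library, assuming a placeholder dataset name (`organizers/EXPLAINITA-task1`; use the actual name from the Data page) and a placeholder `my_system_explain` function standing in for your system:

```python
from datasets import load_dataset

def my_system_explain(example):
    # Placeholder for your explanation system.
    return "your explanation here"

# 1. Download the task dataset (placeholder name -- see the Data page).
ds = load_dataset("organizers/EXPLAINITA-task1")

# 2. Fill in the empty/default "explanation" field of the test split.
ds["test"] = ds["test"].map(
    lambda ex: {**ex, "explanation": my_system_explain(ex)}
)

# 3. Push it to the Hub under your own namespace.
ds.push_to_hub("yourname/EXPLAINITA-task1-primary")
```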
If you choose to submit via HuggingFace, we suggest making the datasets for your runs gated, with manual review of access requests, so that you can verify that we, and only we, can access the dataset.
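If your version of `huggingface_hub` exposes `update_repo_settings`, gating with manual review can also be enabled programmatically (a hedged sketch; the same setting is available from the dataset's Settings page on the Hub):

```python
from huggingface_hub import HfApi

api = HfApi()  # assumes you are logged in (e.g. via `huggingface-cli login`)

# Require manual approval of access requests on the run's dataset repo.
api.update_repo_settings(
    repo_id="yourname/EXPLAINITA-task1-primary",
    repo_type="dataset",
    gated="manual",
)
```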
At the end of the evaluation window, we plan to display the results (both primary and additional runs) via a HuggingFace leaderboard.