Submit results ⬆️ 

The submission must be a zip file containing a folder named after your team. Inside it, create one subfolder for each subtask and language in which you participated. Each prediction file must be a jsonl file with one line per sample, where each line contains the id and the predicted label. The name of each prediction file is the run name.

Below is the directory tree of a submission from a team named test_team that participated in both subtasks with three runs each, named run1, run2, and run3, followed by the jsonl file with the predictions of run1 for subtask_1.

test_team
├── subtask_1
│   ├── run1.jsonl
│   ├── run2.jsonl
│   └── run3.jsonl
└── subtask_2
    ├── run1.jsonl
    ├── run2.jsonl
    └── run3.jsonl

...
{"id": 13, "label": "generated"}
{"id": 14, "label": "human"}
{"id": 15, "label": "human"}
...
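
As an illustration, the following Python sketch assembles a submission zip in the expected layout. It is not an official script; the team name, run names, and prediction values are hypothetical placeholders, and you should replace them with your own data.

import json
import shutil
from pathlib import Path

team = "test_team"  # replace with your team name

# Hypothetical predictions: {subtask: {run_name: [(sample id, predicted label), ...]}}
predictions = {
    "subtask_1": {
        "run1": [(13, "generated"), (14, "human"), (15, "human")],
    },
    "subtask_2": {
        "run1": [(13, "human"), (14, "generated")],
    },
}

root = Path(team)
for subtask, runs in predictions.items():
    subtask_dir = root / subtask
    subtask_dir.mkdir(parents=True, exist_ok=True)
    for run_name, rows in runs.items():
        # One JSON object per line: {"id": ..., "label": ...}
        with open(subtask_dir / f"{run_name}.jsonl", "w", encoding="utf-8") as f:
            for sample_id, label in rows:
                f.write(json.dumps({"id": sample_id, "label": label}) + "\n")

# Zip the team folder; this produces test_team.zip with the team folder at the top level.
shutil.make_archive(team, "zip", root_dir=".", base_dir=team)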

The zip file must be sent to organizers.autextification@gmail.com.

By submitting results to this competition, you consent to the public release of your scores at the IberLEF workshop and in the associated proceedings, at the task organizers' discretion. Scores may include, but are not limited to, automatically and manually calculated quantitative judgements, qualitative judgements, and such other metrics as the task organizers see fit. You accept that the ultimate decision of metric choice and score value is that of the task organizers.

You further agree that the task organizers are under no obligation to release scores and that scores may be withheld if it is the task organizers' judgement that the submission was incomplete, erroneous, deceptive, or violated the letter or spirit of the competition's rules. Inclusion of a submission's scores is not an endorsement of a team or individual's submission, system, or science.