Results

The final results on the test set will be published on this page.

See our CodaLab competition page [here].


Baseline

We use a BERT model as the baseline, with the following configuration:

  • We use the BERT base model.

  • We initialize from the BETO pre-trained checkpoint (Spanish BERT).

  • We fine-tune the model 5 times with different random seeds and report the average F1 score on the paraphrase class.


The baseline obtained an F1 of 0.7026 on the paraphrase class.
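
For reference, below is a minimal sketch of such a baseline setup using the Hugging Face transformers library. The BETO checkpoint id, the dataset field names (text1, text2, label), and all hyperparameters are illustrative assumptions, not the exact configuration used for the official baseline.

```python
# Sketch: fine-tune BETO for binary paraphrase classification over several
# random seeds and average the F1 score on the paraphrase class.
# Field names and hyperparameters below are assumptions for illustration.
import numpy as np
from sklearn.metrics import f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments, set_seed)

MODEL_NAME = "dccuchile/bert-base-spanish-wwm-cased"  # a common BETO base checkpoint

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = logits.argmax(axis=-1)
    # F1 on the paraphrase class (assumed to be label 1)
    return {"f1_paraphrase": f1_score(labels, preds, pos_label=1)}

def run_seed(seed, train_ds, eval_ds):
    set_seed(seed)
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(
        MODEL_NAME, num_labels=2)

    def tokenize(batch):
        # Assumes sentence-pair fields "text1"/"text2" and a "label" column
        return tokenizer(batch["text1"], batch["text2"],
                         truncation=True, padding="max_length", max_length=128)

    args = TrainingArguments(output_dir=f"baseline_seed{seed}",
                             num_train_epochs=3,
                             per_device_train_batch_size=16,
                             seed=seed)
    trainer = Trainer(model=model, args=args,
                      train_dataset=train_ds.map(tokenize, batched=True),
                      eval_dataset=eval_ds.map(tokenize, batched=True),
                      compute_metrics=compute_metrics)
    trainer.train()
    return trainer.evaluate()["eval_f1_paraphrase"]

# Average the paraphrase-class F1 over 5 seeds, as in the baseline protocol:
# scores = [run_seed(s, train_ds, eval_ds) for s in range(5)]
# print("average F1 (paraphrase):", np.mean(scores))
```

Averaging over several seeds is a common way to reduce the run-to-run variance that fine-tuned BERT-size models tend to exhibit.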

The final ranking of participating teams will be displayed here.

Sponsors

Contact: task.parmex@gmail.com