Frequently Asked Questions

Why are there 2 rounds of the competition? What is the difference between them?

Due to this year's unique circumstances, we have opened a second competition round ("Round 2"). The rules and datasets are identical to those of the first round ("Round 1"), but with later dates and more training data. Round 1 is an optional practice phase, and its leaderboard will not be considered in the final result. The final result will be based solely on the Round 2 leaderboard.

Participants may take part in Round 1, Round 2, or both. For fairness and transparency, there will be a separate leaderboard for each round, and the round (1 or 2) will be noted on the leaderboards.

Where can I download the data?

To download the datasets, you will need to register by filling out the Google Sheet. We will then send you an email with a link to the evaluation microsite, where you can download the datasets and submit your predictions. If you do not receive the email before April 28, please contact us. Further instructions are available on the evaluation website.

What are the different competition tracks?

We have two competition tracks: MAIN track (mandatory) and AUXILIARY track (optional), each with its own leaderboard.

In the MAIN track, participants are not allowed to use external data. You may use standard, publicly available pretrained models and word embeddings, such as GloVe embeddings and pretrained BERT models. However, custom pretrained models and custom word embeddings (i.e., models that were pretrained using custom data) are not allowed.

The optional AUXILIARY track allows the use of external data and custom pretrained models and word embeddings.

Users who wish to participate in the AUXILIARY track must also participate in the MAIN track.

What should be the format of the technical report?

The paper should be submitted in NAACL2021 style format with a maximum of 8 pages. Further information about the NAACL2021 format is available on the NAACL2021 website. There is also a NAACL2021 template on Overleaf.

What is the submission file format?

See here.

Why do you provide the MP4 files?

The MP4 files are provided for completeness only. We do not expect that the participants will use the video files as part of their model features. However, we might be surprised :)

What are the evaluation metrics for this year's challenge?

The evaluation metric for this challenge is the F1-score. Read more about the metrics.
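As a quick sanity check before submitting, you may want to compute the F1-score on a held-out split yourself. The sketch below assumes a simple binary F1 (positive class = 1); the averaging scheme used for the official leaderboard (binary, macro, micro, etc.) is not specified here, so treat this only as an illustration of the formula F1 = 2PR / (P + R).

```python
def f1_score(y_true, y_pred):
    """Binary F1-score: harmonic mean of precision and recall.

    Assumes labels are 0/1 with 1 as the positive class. This is an
    illustrative sketch, not the official evaluation script.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0  # no true positives => both precision and recall are 0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy example: 2 true positives, 1 false positive, 1 false negative
print(f1_score([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))  # 0.666...
```

If your labels match the official format, the same numbers can also be obtained with `sklearn.metrics.f1_score`, which additionally supports macro/micro averaging for multi-class setups.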

Will the technical papers be published?

The technical papers for this challenge will not be published/archived. However, we will upload them to the website for other workshop participants to view.

I put my code on GitHub. How can I share it?

We encourage participants to share their code.

You can add a link to the GitHub repository in your paper. Here are two possible phrases you can use in your LaTeX file:

  • The source code for our model is available as a GitHub repository\footnote{https://github.com/...}.

  • The source code for our model is available as a GitHub repository at \texttt{https://github.com/...}.

Once your report is available online, it's also a good idea to update your repository's README.md to include a link back to the report.