Due to this year's unique circumstances affecting ACL 2020, we have opened another competition round ("Round 2"). The rules and datasets are identical to the first round ("Round 1"), but with later dates.
Participants can participate in Round 1, Round 2, or both. For fairness and transparency, there will be separate leaderboards for the two rounds, and the round (1 or 2) will be noted in the leaderboards.
To download the datasets, you will need to register by filling in the form. We will then send you an email with a link to the evaluation microsite, where you can download the datasets and submit your predictions. Further instructions are available on the evaluation website.
We have two competition tracks: MAIN track (mandatory) and AUXILIARY track (optional), each with its own leaderboard.
In the MAIN track, participants are not allowed to use external data. You may use standard, publicly available pretrained models and word embeddings, such as GloVe embeddings and pre-trained BERT models. However, custom pretrained models and custom word embeddings (i.e., models that were pretrained on custom data) are not allowed.
The optional AUXILIARY track allows the use of external data and custom pretrained models and word embeddings.
Users who wish to participate in the AUXILIARY track must also participate in the MAIN track.
The paper should be submitted in the ACL 2020 style format, with a maximum of 8 pages. Further information about the ACL 2020 format is available on the ACL 2020 website. There is also an ACL 2020 template on Overleaf.
See here.
The MP4 files are provided for completeness only. We do not expect that the participants will use the video files as part of their model features. However, we might be surprised :)
The evaluation metric for this challenge is "mean average recall at 6", or MAR@6. Read more about the metrics.
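As a rough illustration of how such a metric can be computed, here is a minimal sketch. Note that this is an assumption about the definition, not the challenge's official scoring code: it treats MAR@6 as the recall of each instance's top-6 ranked predictions, averaged over all instances. The official definition on the metrics page takes precedence.

```python
def recall_at_k(predicted, relevant, k=6):
    """Fraction of the relevant items that appear among the top-k predictions."""
    if not relevant:
        return 0.0
    top_k = set(predicted[:k])
    return len(top_k & set(relevant)) / len(relevant)


def mean_average_recall_at_6(all_predicted, all_relevant):
    """Average recall@6 over all test instances (assumed MAR@6 definition)."""
    scores = [recall_at_k(p, r, k=6)
              for p, r in zip(all_predicted, all_relevant)]
    return sum(scores) / len(scores)


# Toy example: two instances, each with a ranked prediction list.
predicted = [["a", "b", "c", "d", "e", "f", "g"], ["x", "y", "z"]]
relevant = [["a", "g"], ["x", "q"]]
print(mean_average_recall_at_6(predicted, relevant))  # 0.5
```

In the toy example, each instance recovers one of its two relevant items within the top 6, so both score 0.5 and the mean is 0.5.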
The technical papers for this challenge will not be published/archived. However, we will upload them to the website for other workshop participants to view.
We encourage participants to share their code.
You can add a link to the GitHub repository in your paper. Here are two possible phrases you can use in your LaTeX file:
The source code for our model is available as a GitHub repository\footnote{https://github.com/...}.
The source code for our model is available as a GitHub repository at \texttt{https://github.com/...}
Once your report is available online, it's also a good idea to update your repository's README.md
to include a link back to the report.