Task 2: Claim Retrieval

Don't forget to register through CLEF2020 Lab Registration before 26 April 2020, using this link. Otherwise, your submission will NOT be considered!

Definition

Given a check-worthy claim and a dataset of previously verified claims, rank the verified claims so that those that verify the input claim (or a sub-claim of it) are ranked at the top. This task will run in English.
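As a minimal sketch of the task, the snippet below ranks a small set of verified claims against an input claim by token overlap (Jaccard similarity). All claim texts, IDs, and function names here are illustrative; this is not the official baseline (the example runs later on this page are tagged `elastic`, which suggests an Elasticsearch-based baseline).

```python
# Toy retrieval sketch: rank verified claims against an input claim by
# token overlap (Jaccard). Claims, IDs, and names are illustrative only.

def tokens(text):
    return set(text.lower().split())

def rank_vclaims(input_claim, vclaims):
    """Return (vclaim_id, score) pairs sorted by descending score."""
    q = tokens(input_claim)
    scored = []
    for vclaim_id, vclaim_text in vclaims.items():
        v = tokens(vclaim_text)
        score = len(q & v) / len(q | v) if q | v else 0.0
        scored.append((vclaim_id, score))
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored

vclaims = {
    "303": "the earth is flat",
    "512": "vaccines cause autism",
    "107": "the moon landing was staged",
}
ranking = rank_vclaims("no the earth is not flat", vclaims)
```

A real system would replace the Jaccard score with a proper retrieval model, but the output shape (an ordered list of verified-claim IDs with scores per input claim) is exactly what the submission format below expects.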

Evaluation

This task is evaluated as a ranking task. The ranked list produced for each input claim is evaluated with standard ranking measures: MAP@k for k=1,3,5,10,20,all; MRR; and Recall@k for k=1,3,5,10,20. The official measure is MAP@5.
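The measures above can be sketched as follows, assuming per-tweet ranked lists of verified-claim IDs and a gold mapping from each tweet to its relevant verified claims. Variable names and the toy data are illustrative, and the AP@k normalization shown (dividing by min(|relevant|, k)) is one common convention; the official scorer may differ in detail.

```python
# Sketch of MAP@k and MRR over per-tweet rankings. Toy data and the
# exact AP@k normalization are assumptions, not the official scorer.

def average_precision_at_k(ranked, relevant, k):
    hits, precision_sum = 0, 0.0
    for i, vclaim_id in enumerate(ranked[:k], start=1):
        if vclaim_id in relevant:
            hits += 1
            precision_sum += hits / i
    return precision_sum / min(len(relevant), k) if relevant else 0.0

def mean_average_precision(rankings, gold, k):
    """MAP@k over all tweets; rankings and gold are dicts keyed by tweet_id."""
    return sum(average_precision_at_k(rankings[t], gold[t], k)
               for t in gold) / len(gold)

def reciprocal_rank(ranked, relevant):
    for i, vclaim_id in enumerate(ranked, start=1):
        if vclaim_id in relevant:
            return 1.0 / i
    return 0.0

gold = {"359": {"303"}, "514": {"107"}}
rankings = {"359": ["512", "303"], "514": ["107"]}
map5 = mean_average_precision(rankings, gold, k=5)
```

Here tweet 359 finds its relevant claim at rank 2 (AP@5 = 0.5) and tweet 514 at rank 1 (AP@5 = 1.0), so MAP@5 = 0.75.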

Submission Runs

Each team can submit up to one primary and two contrastive submissions. Ranking will be based on the primary submissions, but results will be presented for the contrastive ones as well.

Submission Format

Each row of the result file corresponds to a (tweet, verified_claim) pair and indicates the rank of that verified claim with respect to the input tweet. Each row has the following format:

tweet_id Q0 vclaim_id rank score tag
359 Q0 303 1 1.1086 elastic
359 Q0 512 2 1.0685 elastic
514 Q0 107 1 4.5401 elastic
...

Where score is a number indicating how well the verified claim can be used to fact-check the tweet, rank is the rank of the verified claim according to its score, and tag is a unique ID for one of the team's runs.

Your result file MUST contain at most 1,000 rows (each referring to one verified claim) per input tweet. Otherwise, the scorer will not score the file.
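A small sketch of writing and sanity-checking a run file in the format above. The file path, run tag, and the use of tabs as the field separator are assumptions (the example shows whitespace-separated fields); adapt them to the official submission instructions.

```python
# Sketch: write a run file (tweet_id Q0 vclaim_id rank score tag) and
# check the at-most-1,000-rows-per-tweet constraint. Paths, tag, and
# tab separation are illustrative assumptions.
import os
import tempfile
from collections import Counter

def write_run(path, rankings, tag, max_rows_per_tweet=1000):
    """rankings: dict tweet_id -> list of (vclaim_id, score), best first."""
    with open(path, "w") as f:
        for tweet_id, scored in rankings.items():
            for rank, (vclaim_id, score) in enumerate(
                    scored[:max_rows_per_tweet], start=1):
                f.write(f"{tweet_id}\tQ0\t{vclaim_id}\t{rank}\t"
                        f"{score:.4f}\t{tag}\n")

def check_run(path, max_rows_per_tweet=1000):
    """Return True if no tweet has more than max_rows_per_tweet rows."""
    counts = Counter()
    with open(path) as f:
        for line in f:
            counts[line.split()[0]] += 1
    return all(n <= max_rows_per_tweet for n in counts.values())

rankings = {"359": [("303", 1.1086), ("512", 1.0685)],
            "514": [("107", 4.5401)]}
path = os.path.join(tempfile.gettempdir(), "run_primary.tsv")
write_run(path, rankings, "myrun")
ok = check_run(path)
```

Truncating to the top 1,000 rows per tweet at write time, as above, is the simplest way to guarantee the scorer's limit is never exceeded.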

Please check all the details here.