We calculate the agreement score of an assertion by subtracting the percentage of times the assertion was disagreed with from the percentage of times it was agreed with. The agreement score can be used to rank assertions from least agreement (−1) to most agreement (1). A score of 0 indicates that equal numbers of participants agree and disagree with the assertion and that the assertion is therefore highly controversial. The top-ranked assertions can then be used to better understand the debate on an issue.
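This calculation can be sketched as follows; the function name and vote counts are illustrative, not part of the released data:

```python
# Sketch of the agreement score: the fraction of agree votes minus the
# fraction of disagree votes, yielding a value in [-1, 1].
def agreement_score(n_agree: int, n_disagree: int) -> float:
    """Percentage agreeing minus percentage disagreeing, in [-1, 1]."""
    total = n_agree + n_disagree
    if total == 0:
        return 0.0  # no judgments: treat as neutral
    return (n_agree - n_disagree) / total

# Example: 7 of 10 participants agree, 3 disagree.
print(agreement_score(7, 3))  # 0.4
```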
We transform the comparative judgments for support and opposition into scores ranging from −1 to 1. These scores can be used to rank assertions from most strongly supported (1) to most strongly opposed (−1). When a participant selects an assertion as ‘most oppose’ in the comparative annotations, she may either most oppose or least support the assertion. We can infer which of the two interpretations applies from the agreement judgments: ‘most oppose’ can be considered as ‘least support’ if the participant agrees with the assertion, and it remains ‘most oppose’ if she disagrees. Analogously, ‘most support’ can be interpreted as ‘least oppose’ if the participant disagrees with the assertion, and as ‘most support’ if she agrees. Given this consideration, we additionally calculate a support score (ranging from ‘most strongly supported’ (1) to ‘least strongly supported’ (0)) and an oppose score (ranging from ‘most strongly opposed’ (1) to ‘least strongly opposed’ (0)).
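The reinterpretation rule described above can be sketched as a small lookup; the function name and label strings are illustrative:

```python
# Sketch of reinterpreting a comparative choice using the participant's
# own agree/disagree judgment on the chosen assertion.
def interpret_choice(choice: str, agrees: bool) -> str:
    """Map a comparative choice to a fine-grained support/oppose label."""
    if choice == "most oppose":
        # Agreeing with the assertion means the choice expressed weak
        # support rather than genuine opposition.
        return "least support" if agrees else "most oppose"
    if choice == "most support":
        # Disagreeing means the choice expressed weak opposition.
        return "most support" if agrees else "least oppose"
    raise ValueError(f"unknown choice: {choice}")

print(interpret_choice("most oppose", agrees=True))    # least support
print(interpret_choice("most support", agrees=False))  # least oppose
```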
The data can be found here: data.tsv
We also provide the matrices containing the agree/disagree decision per assertion per person (including demographics). These matrices can be found here.
The crowdsourced data can be used to determine which participants show similar response behavior and which assertions have been voted on similarly. Being able to judge similarity between assertions helps identify interrelated assertions. Voting similarity between participants can be used to predict their judgments on assertions they have not yet annotated. We determine the voting similarity between pairs of participants by computing the cosine of the vectors that represent the rows in the agreement matrix AM, and we determine the degree to which two assertions are judged similarly by computing the cosine of its column vectors.
This study has been approved by the NRC Research Ethics Board (NRC-REB) under protocol number 2017-83. REB review seeks to ensure that research projects involving humans as participants meet Canadian standards of ethics.