ART

Dataset summary for ART A&B

Thank you for your interest in ART A&B! We're delighted to make this dataset available to the research community in JSON format. Please fill in the short form at the bottom of this page, letting us know who you are and what your interest is, and we'll send you a link to the data for use in your own study. Please acknowledge your use of the dataset in all publications based on it by citing the following publication:

@misc{collier2022reality,
  doi       = {10.48550/ARXIV.2208.11981},
  url       = {https://arxiv.org/abs/2208.11981},
  author    = {Collier, Nigel H. and Liu, Fangyu and Shareghi, Ehsan},
  title     = {On Reality and the Limits of Language Data},
  publisher = {arXiv},
  year      = {2022}
}

If you have any questions, please email Nigel Collier at nhc30@cam.ac.uk with 'ART data' in the subject line.

Dataset Structure

Data Instances

Each data point contains three fields. For example, an instance from art_full_set is:

{
  "question_type": "relation",
  "question": "Does happy have a similar meaning to tears?",
  "answer": false
}

Data Fields

Each data point has the following three fields:

  • question_type: a string; either relation or analogy, indicating whether this is a relation or an analogy question.

  • question: a string containing the query.

  • answer: a boolean; either true or false.
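The schema above can be checked with a few lines of Python. This is a minimal sketch that assumes each split file decodes to a JSON list of such three-field objects (the top-level structure is an assumption, not stated in this card); it parses an inline sample rather than a downloaded file.

```python
import json

# Assumed layout: a JSON list of data points, each with the three fields
# described above. The inline sample mirrors the example instance.
sample = '''
[
  {"question_type": "relation",
   "question": "Does happy have a similar meaning to tears?",
   "answer": false}
]
'''

data = json.loads(sample)
for item in data:
    # Validate the three documented fields and their types.
    assert item["question_type"] in ("relation", "analogy")
    assert isinstance(item["question"], str)
    assert isinstance(item["answer"], bool)
```

To validate a downloaded split, replace `json.loads(sample)` with `json.load(open("art_full_set.json"))`.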

Data Splits

We provide two data splits, (1) art_full_set and (2) art_commonsense_set, stored in the JSON files art_full_set.json and art_commonsense_set.json.

art_full_set has 288 data points (48 analogy questions and 240 relation questions). This is the full dataset, with ground-truth answers provided by an expert committee (the authors).

art_commonsense_set has 250 data points (26 analogy questions and 224 relation questions). This is the subset of the full set containing ART questions whose answers have high human agreement; the ground truths are the answers most commonly agreed on by human annotators.

See our paper for more details.
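Since both splits share one schema, the per-type breakdown of any split can be recomputed directly. The sketch below assumes a split decodes to a list of data points (an assumption about the file layout, as above) and tallies questions by question_type; the three inline points are hypothetical stand-ins for real data.

```python
from collections import Counter

def split_counts(points):
    """Tally relation vs. analogy questions in a list of ART data points."""
    return Counter(p["question_type"] for p in points)

# Hypothetical stand-in data; in practice, load a split with
# json.load(open("art_full_set.json")) and pass it in.
points = [
    {"question_type": "relation", "question": "...", "answer": False},
    {"question_type": "analogy",  "question": "...", "answer": True},
    {"question_type": "relation", "question": "...", "answer": False},
]
counts = split_counts(points)
# counts["relation"] == 2, counts["analogy"] == 1
```

Running this over a downloaded split is an easy way to confirm the counts quoted above.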