Adversarial Writing of Quizbowl Questions

Website accompanying the TACL 2019 paper "Trick Me If You Can: Human-in-the-loop Generation of Adversarial Question Answering Examples". Presented at ACL, Poster Session 3 (Arsenale; Monday, July 26, 16:00).

Data

Visit the data section of QANTA to download:

  • The adversarial datasets
  • Edit histories of the questions, showing how the authors arrived at the final adversarial questions
  • Additional Quizbowl data

For the impatient, there are human-readable versions of the preliminary and final questions used in the December 15 event.

Live Competition

Videos, questions, and winners are available in the description of the December 15, 2018 event.

Interface

Visit write.qanta.org to write adversarial Quizbowl questions.

Code

Visit our GitHub repository to see the code for the interface, models, and interpretations.

Our data is easy to use in Python:

import json

with open('qanta.tacl-trick.json') as f:
    data = json.load(f)
    print(f"Bibtex: {data['bibtex']}")
    print(f"Version: {data['version']}")
    print('Printing first few questions')
    print()
    for q in data['questions'][:3]:
        print('Question')
        print(q['text'])
        # 'answer' holds the unnormalized answer string; 'page' is the canonical answer
        print('Page:', q['page'])
        # Which interface the author used to write the question
        print('Interface:', q['interface'])
        print()
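Once loaded, the questions are plain dictionaries, so standard tooling works on them. As a sketch, here is one way to tally questions by authoring interface; the field names follow the snippet above, but the sample records and the interface values (`'ir'`, `'rnn'`) are hypothetical stand-ins for the real file's contents:

```python
from collections import Counter

# Hypothetical sample mirroring the structure of qanta.tacl-trick.json;
# substitute data['questions'] loaded from the real file.
sample_questions = [
    {"text": "Example question one...", "page": "Albert_Einstein", "interface": "ir"},
    {"text": "Example question two...", "page": "France", "interface": "rnn"},
    {"text": "Example question three...", "page": "France", "interface": "ir"},
]

# Count how many questions were written with each interface variant.
counts = Counter(q["interface"] for q in sample_questions)
print(counts)  # Counter({'ir': 2, 'rnn': 1})
```

The same pattern extends to grouping by `page` or filtering question text before feeding the data into a QA model.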

A playlist of all our videos explains the adversarial question writing process, presents example questions, and shows how a live competition between computers and humans played out.

Eric Wallace

Pedro Rodriguez

Shi Feng

Ikuya Yamada

Jordan Boyd-Graber