What is this?
Every so often, we run a human vs. computer question answering competition (see our past events). Our goal is to create a fun, interesting battlefield where humans and machines compete at question answering. But we can only do it with your help.
You can participate by:
Writing questions (for both the in-person and online events)
Creating systems (i.e., making a computer team)
Playing as a human (in person or online)
Why?
AI is really good at answering questions (except when it isn't), and these systems aren't always very good at detecting when they have gotten an answer wrong. The goal of this competition is to:
test how well systems can gauge their own accuracy (called "calibration" in the lingo)
compare systems using our calibration metric
compare humans' and computers' ability to calibrate their answers
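The page doesn't spell out the exact calibration metric used for scoring, but as a rough illustration of what "calibration" means here, a standard choice is Expected Calibration Error (ECE): bucket answers by stated confidence and compare each bucket's average confidence to its actual accuracy. The sketch below is a hypothetical example, not the competition's official metric.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Average |accuracy - confidence| over equal-width confidence bins,
    weighted by the fraction of answers falling in each bin.
    (Illustrative only; not the competition's official metric.)"""
    assert len(confidences) == len(correct)
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # include confidence == 1.0 in the top bin
        idx = [i for i, c in enumerate(confidences)
               if lo <= c < hi or (b == n_bins - 1 and c == hi)]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        accuracy = sum(1 for i in idx if correct[i]) / len(idx)
        ece += (len(idx) / n) * abs(accuracy - avg_conf)
    return ece

# A well-calibrated system: claims 80% confidence and is right 80% of the time
print(expected_calibration_error([0.8] * 10, [True] * 8 + [False] * 2))
# An overconfident system: claims 90% confidence but is always wrong
print(expected_calibration_error([0.9] * 10, [False] * 10))
```

A perfectly calibrated system scores near 0; the overconfident one scores near 0.9, so lower is better.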
Important Dates (more details on the specific sites):
Kickoff: July 27, 2024 at Northwestern University (Chicago Open weekend)
Question submission deadline for in-person events: October 10, 2024
In-person competition (ACF Fall Mirror at MIT and Berkeley): October 19, 2024
Question submission deadline for online events: November 15, 2024
System submission deadline: November 15, 2024
Online human player competition: November 24, 2024
System submission deadline (Last Call): December 1, 2024
Writer and System Winners announced: December 23, 2024
If you have any questions, please contact us at qanta@googlegroups.com.