The competition is open to everyone.
One of our major goals when conceiving the competition was to design it to provide an efficient separation between the performances of reward schedules derived from different behavioral models. That is, to maximize the probability that if the "right" model (the one actually governing decision making in the experiment) is used to optimize a reward schedule, the competition mechanism will identify that schedule as the winner. Depending on the composition of simulated models and their parameters, our simulations suggested that the probability of correct winner identification peaks at approximately 25 rewards.
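To make the idea of "winner identification" concrete, here is a minimal Monte Carlo sketch of that kind of simulation. It is not our actual simulation code: the two Q-learning models, their learning rates, the two schedule families ("all rewards early" vs. "rewards spread evenly"), and the scoring rule (total target-arm choices across subjects) are all illustrative assumptions. The sketch only shows the shape of the question: for a given reward budget, how often does the schedule optimized for the true model beat the schedule optimized for the wrong one?

```python
import numpy as np

rng = np.random.default_rng(0)

N_TRIALS = 100        # trials per simulated subject (assumed)
N_SUBJECTS = 30       # subjects per competition arm (assumed)
N_COMPETITIONS = 200  # Monte Carlo repetitions of the whole competition

def simulate_subject(alpha, schedule):
    """One epsilon-greedy Q-learning subject on a two-armed task.

    schedule[t] is True if the target arm pays a reward on trial t
    (the non-target arm never pays). Returns the number of target-arm
    choices, the quantity this hypothetical competition scores.
    """
    q = np.zeros(2)  # Q-values for (non-target, target)
    eps = 0.1
    target_choices = 0
    for t in range(len(schedule)):
        if rng.random() < eps or q[0] == q[1]:
            a = int(rng.integers(2))          # explore / break ties randomly
        else:
            a = int(np.argmax(q))             # exploit
        r = 1.0 if (a == 1 and schedule[t]) else 0.0
        q[a] += alpha * (r - q[a])            # Q-learning update
        target_choices += a
    return target_choices

def make_schedule(n_rewards, spread):
    """Place n_rewards rewards either early (spread=0) or evenly (spread=1)."""
    schedule = np.zeros(N_TRIALS, dtype=bool)
    if spread == 0:
        schedule[:n_rewards] = True
    else:
        schedule[np.linspace(0, N_TRIALS - 1, n_rewards).astype(int)] = True
    return schedule

def best_schedule_for(alpha, n_rewards, n_sims=200):
    """Pick the schedule family that maximizes the model's expected score;
    a stand-in for full schedule optimization."""
    scores = [np.mean([simulate_subject(alpha, make_schedule(n_rewards, s))
                       for _ in range(n_sims)]) for s in (0, 1)]
    return make_schedule(n_rewards, int(np.argmax(scores)))

def p_correct_identification(alpha_true, alpha_wrong, n_rewards):
    """Monte Carlo estimate of P(schedule optimized for the true model wins)."""
    sched_true = best_schedule_for(alpha_true, n_rewards)
    sched_wrong = best_schedule_for(alpha_wrong, n_rewards)
    wins = 0
    for _ in range(N_COMPETITIONS):
        score_true = sum(simulate_subject(alpha_true, sched_true) for _ in range(N_SUBJECTS))
        score_wrong = sum(simulate_subject(alpha_true, sched_wrong) for _ in range(N_SUBJECTS))
        wins += score_true > score_wrong
    return wins / N_COMPETITIONS

for n_rewards in (5, 15, 25, 40):
    p = p_correct_identification(alpha_true=0.3, alpha_wrong=0.05, n_rewards=n_rewards)
    print(f"{n_rewards:3d} rewards: P(correct identification) ~ {p:.2f}")
```

Under assumptions like these, sweeping the reward budget traces out a curve of identification probability, and the budget chosen for the real competition sits near its peak; in our (much richer) simulations that point fell at roughly 25 rewards.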
All the data collected in the competition will become publicly available. Some behavioral data is available prior to the beginning of the competition (see Data).
We believe that requiring our models to generate testable, measurable tools has the potential to significantly push related research forward.
Consider the field of deep learning before and after the ImageNet challenge. What changed in terms of theory? At first, not much. But the challenge, and the opportunity to solve a practical problem using tools that were at least 20 years old, swept through the whole field, which in turn generated great theoretical advances.
The additional constraint of having to be effective in "real world" scenarios forces us to take into account factors we often discount when all we are interested in is fitting a model to a specific dataset.
The competition offers a novel framework for testing behavioral models. We believe that the need to be compatible with real-world, engineering constraints has the potential to push the field forward, both in applications and in theory.