Motivation

Download the full introduction to the competition paper: "A choice prediction competition for market entry games: An introduction," Games, Special Issue on Predicting Behavior in Games, 2010, 1(2), 117-136. (Some numbers were wrong in the initially published version; the correction is available here.)



Previous studies of the effect of experience on economic behavior demonstrate the potential of the assumption that a general learning process drives adaptation to incentives in different settings.  This assumption was found to provide useful ex-ante predictions of behavior in several studies (e.g., Erev & Roth, 1998), is consistent with the observation of similar reactions to reinforcements across species (e.g., Thorndike, 1898), and fits the discovery that the activity of certain dopamine neurons is correlated with one of the terms assumed by reinforcement learning models (see Schultz, 1998).

However, more recent studies reveal that advancing beyond this demonstration of potential is not simple.  Different attempts to quantify the assumed learning process appear to lead to different conclusions, and the relationship between the distinct results is not always clear (see the review by Erev & Haruvy, 2010).

We believe that there are two main reasons for the inconsistencies in the learning literature.  The first is the fact that learning is only one of the factors that affect behavior in repeated games. Other important factors include framing, fairness, reciprocation, and reputation.  It is possible that different studies reached different conclusions because they studied learning in environments in which these important factors have different implications.  A second cause of confusion is the tendency to focus on relatively small data sets and relatively small sets of models.

The current project takes two measures to address these problems.  The first is an extensive experimental study of the effect of experience under conditions that minimize the effect of other factors.  The second is the organization of an open choice prediction competition that facilitates the evaluation of a wide class of models.  We ran a large “estimation study” examining problems drawn randomly from a common space (see the problem selection algorithm page), and we challenge other researchers to predict, based on the results of this first study, the results of a second study, referred to as the “competition study.”

We focus on repeated 4-person Market Entry games that involve both environmental and strategic uncertainty.  At each trial of these games, each player has to decide (individually) between “entering a risky market” and “staying out” (a safer prospect).  The payoffs depend on the realization of a binary gamble (the realization at trial t is denoted Gt; it equals H with probability Ph, and L otherwise), the number of entrants (E), and two additional parameters (k and S).  The exact payoff for player i at trial t is:

Vi(t) =

  10 – k(E) + Gt                                                 if i enters

  round(Gt/S) with probability .5; –round(Gt/S) otherwise        if i stays out

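To make the payoff rule concrete, the following sketch computes a player's payoff for one trial under the definitions above (Python; the function and variable names are ours, not part of the competition materials).

```python
import random

def realize_gamble(h, p_h, l):
    """Realization of the binary gamble: Gt equals H with probability Ph, and L otherwise."""
    return h if random.random() < p_h else l

def payoff(entered, g_t, n_entrants, k, s):
    """Payoff of player i at trial t in the market entry game described above."""
    if entered:
        # Entering: 10 - k(E) + Gt, where E is the number of entrants in this trial.
        return 10 - k * n_entrants + g_t
    # Staying out: round(Gt/S) with probability .5, and -round(Gt/S) otherwise.
    r = round(g_t / s)
    return r if random.random() < 0.5 else -r
```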

The competitors' task:

The participants in each competition will be allowed to study the results of the estimation study.  Their goal will be to develop a model that predicts the results of the competition study.  The model should be implemented in a computer program that reads a problem’s parameters (k, H, Ph, L, S) as input and outputs the predicted entry rate, average payoff, and alternation rate.
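As an illustration of the required input/output format only (this is not one of the baseline models, and the prediction rule inside is a deliberately naive placeholder), a submission program could look roughly like the following sketch.

```python
def predict_problem(k, h, p_h, l, s):
    """Placeholder submission: read one problem's parameters (k, H, Ph, L, S) and
    return the predicted entry rate, average payoff, and alternation rate.
    The rule below is only a naive illustration of the required format."""
    ev_gamble = p_h * h + (1 - p_h) * l  # expected value of the gamble Gt
    # Naive rule: entry rate at which the expected payoff from entering
    # (10 - k(E) + E[Gt]) equals the expected payoff from staying out (0),
    # approximating E by 4 * entry_rate and clipping to [0, 1].
    eq_entrants = (10 + ev_gamble) / k if k > 0 else 4.0
    entry_rate = max(0.0, min(1.0, eq_entrants / 4.0))
    avg_payoff = entry_rate * (10 - k * 4.0 * entry_rate + ev_gamble)
    alternation_rate = 0.1  # placeholder constant
    return {"entry_rate": entry_rate,
            "average_payoff": avg_payoff,
            "alternation_rate": alternation_rate}

# Example: predictions for one hypothetical problem with k=2, H=10, Ph=.5, L=-10, S=6.
print(predict_problem(k=2, h=10, p_h=0.5, l=-10, s=6))
```

An actual submission would replace the naive rule with a learning model estimated on the results of the estimation study.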

Additional information concerning the competitions can be found in the following pages:

1.      The "registration" and submission page explains the actions that have to be taken in order to participate in the tournaments.   

2.      The “problem selection algorithm” page presents the algorithm that was used to select the problems in the estimation set, and will be used to select the problems in the competition set. 

3.      The “Method” page presents the experimental method.

4.      The “aggregated results” page presents the problems that were studied in the estimation set and the aggregated results.

5.      The “raw data” page presents the raw data for each of the participants.

6.      The "competition rules" page explains the time schedule, and the required features of the submissions.  

7.      The "baseline models" page present examples of possible submissions to the competition.

GOOD LUCK!