Research

Book (in progress)

Structural Bayesian Techniques for Experimental and Behavioral Economics, with Applications in R and Stan

Are you an Experimental Economics PhD student who wants to learn about this material from me? Apply to the Learning, Computational and Bayesian Methods in Experimental Economics workshop at Purdue in May 2024!

If you would like me to teach this material in a short course at your institution, please contact me!

Research interests

Structural Econometrics, Bayesian Econometrics, Experimental Economics, Behavioral Economics, Game Theory

Publications

Bayesian Inference for Quantal Response Equilibrium in Normal-Form Games. Games and Economic Behavior (article in press)

Abstract: This paper develops a framework for estimating Quantal Response Equilibrium models from experimental data using Bayesian techniques. Bayesian techniques offer some advantages over the more commonly used maximum likelihood approach: (i) more favorable small-sample properties, and (ii) ease of handling unobservable heterogeneity. As Quantal Response Equilibrium is a non-linear model, I also discuss some issues with choosing appropriate priors.

Of the twenty-nine models taken to the data, the selected model assumes between-game heterogeneity in the choice precision parameter λ, and some dispersion around the Quantal Response Equilibrium. Some implications of this model are discussed.

Replication files 

Learning Under Uncertainty with Multiple Priors: Experimental Investigation (with Yaroslav Rosokha) Journal of Risk and Uncertainty 62, no. 2 (2021): 157-176. 

Abstract: We run an experiment to compare belief formation and learning under ambiguity and under compound risk at the individual level. We estimate a four-type mixture model assuming that, for each type of uncertainty, subjects may either learn according to Bayes’ Rule or learn according to a multiple priors model of learning. Our results indicate that the majority of subjects are Bayesian, both under compound risk and under ambiguity, while the second most frequent type is subjects who are Bayesian under compound risk but who use a multiple priors model of learning under ambiguity. In addition, we find strong evidence against a common assumption that participants’ initial beliefs (and priors) are consistent with information provided about the uncertain process.

Information cascades in the classroom: the relationship between in-class feedback and course performance (with Amanda Cook and Andrew Meisner). The Journal of Economics and Politics 26, no. 1 (2021): Article 3.

Abstract: Technology is used in undergraduate courses to engage students and provide feedback about understanding. TopHat is an application which displays multiple choice questions mid-class. In this field experiment, we determine whether displaying or hiding the distribution of peer responses has an impact on exam scores. When students see peer responses, we observe information cascades on both correct and incorrect answers. Getting an individual TopHat question correct predicts a 1.3 percentage point increase in final exam scores; however, we find no difference in predictive power between treatments. Participating in one negative cascade predicts that a student will score approximately five percentage points lower on the final exam. Showing students peer feedback may therefore be harmful: a student may get a question wrong in the presence of peer feedback that they would otherwise have answered correctly.

Heterogeneous Trembles and Model Selection in the Strategy Frequency Estimation Method. Journal of the Economic Science Association 6, pages 113–124 (2020)

Abstract: The strategy frequency estimation method (Dal Bó and Fréchette in Am Econ Rev 101(1):411-429, 2011; Fudenberg et al. in Am Econ Rev 102(2):720-749, 2012) allows us to estimate the fraction of subjects playing each of a list of strategies in an infinitely repeated game. Currently, this method assumes that subjects tremble with the same probability. This paper extends the method so that subjects’ trembles can be heterogeneous. Out of 60 ex ante plausible specifications, the selected model uses the six strategies described in Dal Bó and Fréchette (2018), and allows the distribution of trembles to vary by strategy.

Measuring and Comparing Two Kinds of Rationalizable Opportunity Cost in Mixture Models. Games 11, no. 1 (2020): 1. 

Abstract: In experiments of decision-making under risk, structural mixture models allow us to take a menu of theories about decision-making to the data, estimating the fraction of people who behave according to each model. While studies using mixture models typically focus only on how prevalent each of these theories is in people’s decisions, they can also be used to assess how much better this menu of theories organizes people’s decisions than does just one theory on its own. I develop a framework for calculating and comparing two kinds of rationalizable opportunity cost from these mixture models. The first is associated with model mis-classification: How much worse off is a decision-maker if they are forced to behave according to model A, when they are in fact a model B type? The second relates to the mixture model’s probabilistic choice rule: How much worse off are subjects because they make probabilistic, rather than deterministic, choices? If the first quantity dominates, then one can conclude that model A constitutes an economically significant departure from model B in the utility domain. On the other hand, if the second cost dominates, then models A and B have similar utility implications. I demonstrate this framework on data from an existing experiment on decision-making under risk.

How many games are we playing? An experimental analysis of choice bracketing in games (Job market paper). Journal of Behavioral and Experimental Economics, Volume 80, June 2019, Pages 80-91

Abstract: A subject brackets two decisions if she "choose[s] an option in each case without full regard to the other" (Rabin and Weizsäcker, 2009). Although in most situations such behavior is unlikely to be optimal, it is well documented in experiments where subjects make decisions in the absence of strategic considerations. This paper uses an economic experiment to investigate whether subjects also bracket their decisions in games. Subjects played two Volunteer's Dilemmas at the same time, with the payoffs from both games added to their earnings. In a lottery task, subjects were generally revealed to be risk-averse narrow bracketers. Aggregate play in the Roommate's Dilemma is not consistent with predictions made by assuming that all subjects bracket either narrowly or broadly. At the individual level, structural modeling suggests that most subjects bracket narrowly in the game.

GitHub repository for replication files (Matlab and Stata): https://github.com/JamesBlandEcon/HowManyGames 

Random effects probit and logit: understanding predictions and marginal effects (with Amanda Cook). Applied Economics Letters, published online 23 Feb 2018

Abstract: Random effects probit and logit are nonlinear models, so we need predicted probabilities and marginal effects to communicate the economic significance of results. In these calculations, how one treats the individual-specific error term matters. Should one (i) set them equal to zero or (ii) integrate them out? We argue that (ii) is the quantity that most readers would expect to see. We discuss these in the context of the statistical package Stata, which changed its default predictions from (i) to (ii) in version 14. In Appendix 5, we illustrate how to calculate predictions and marginal effects using method (ii) in Stata 13 and earlier.
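
The difference between the two treatments of the individual-specific error is easy to see numerically. For a probit with a normal random effect, integrating the effect out has the closed form Φ(x'β / √(1 + σ²)); the sketch below (with hypothetical values of x'β and σ, chosen for illustration) checks a Monte Carlo average against that closed form and against the set-to-zero prediction:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

xb = 1.0      # linear index x'beta (hypothetical value)
sigma = 1.5   # std. dev. of the individual effect u_i ~ N(0, sigma^2)

# (i) Set the random effect equal to zero:
p_zero = norm.cdf(xb)

# (ii) Integrate the random effect out, here by Monte Carlo:
u = rng.normal(0.0, sigma, size=1_000_000)
p_integrated = norm.cdf(xb + u).mean()

# For probit, (ii) has a closed form: Phi(xb / sqrt(1 + sigma^2))
p_closed = norm.cdf(xb / np.sqrt(1.0 + sigma**2))

# With xb > 0, method (i) overstates the probability relative to (ii).
print(p_zero, p_integrated, p_closed)
```

The larger σ is, the further the set-to-zero prediction drifts from the integrated-out probability, which is why the choice of default matters for reported marginal effects.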

Coordination with third-party externalities (with Nikos Nikiforakis). European Economic Review, Volume 80, November 2015, Pages 1–15

Abstract: When agents face coordination problems, their choices often impose externalities on third parties. If an agent cares about them, or believes others do, these externalities can affect equilibrium selection. We present evidence from lab experiments showing that changes in the size and the sign of third-party externalities have a significant impact on tacit coordination. Decision makers are more willing to incur a cost to try to avoid imposing a large negative externality on a third party than they are to avoid a small negative externality or to generate a large positive externality. However, when decision-makers' incentives are at odds with the interests of third parties, many of them appear to ignore third-party externalities even when these are large in magnitude, and ignoring them implies substantial earnings inequalities and reductions in group earnings. Individuals revealed to be other-regarding in a non-strategic allocation task often behave as if selfish when trying to coordinate. We discuss explanations for our findings.

A detailed chemical kinetic model for pyrolysis of the lignin model compound chroman (with Gabriel da Silva). AIMS Environmental Science, Volume 1, 2013, pp 12-25

Dimensional Cost Exponents for Novel Processing Equipment (with Alexandra Kingsbury and Andreas Mönch). AusIMM (2012), Proceedings Project Evaluation 2012, pp 189-194

Working papers

Approximate computation and estimation of quantal response equilibrium through simulation 

Abstract: I propose a simulation-based method of approximately computing Quantal Response Equilibrium. The method can also be adapted for Bayesian estimation of a game's parameters. I demonstrate this approximation and estimation procedure using an Asymmetric Chicken game. Further examples are provided in an online appendix. 
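
The general idea of computing a logit QRE as a fixed point can be sketched in a few lines. The payoff matrices below are hypothetical (not the paper's Asymmetric Chicken parameterization), and plain damped fixed-point iteration stands in for the paper's simulation-based approximation:

```python
import numpy as np

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical asymmetric 2x2 payoff matrices (rows: row player's actions,
# columns: column player's actions).
U1 = np.array([[0.0, 4.0],
               [1.0, 3.0]])   # row player's payoffs
U2 = np.array([[0.0, 1.0],
               [5.0, 3.0]])   # column player's payoffs

def logit_qre(U1, U2, lam, n_iter=5000, damp=0.1):
    """Damped fixed-point iteration for a 2x2 logit QRE."""
    p = np.full(2, 0.5)  # row player's mixed strategy
    q = np.full(2, 0.5)  # column player's mixed strategy
    for _ in range(n_iter):
        # Quantal (logit) best responses to the current strategies
        p_new = softmax(lam * (U1 @ q))
        q_new = softmax(lam * (U2.T @ p))
        # Damped update for stability
        p = (1 - damp) * p + damp * p_new
        q = (1 - damp) * q + damp * q_new
    return p, q

p, q = logit_qre(U1, U2, lam=1.0)  # at the fixed point, p and q are mutual
                                   # quantal best responses
```

As the precision parameter `lam` grows, the fixed point approaches a Nash equilibrium; at `lam = 0` both players mix uniformly.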

Combining decision-level data from multiple experiments: what is the pooled estimator doing? 

Abstract: When analyzing decision-level data from more than one economic experiment, the pooled OLS estimator is a weighted sum of (i) within-experiment treatment effects, and (ii) an estimate of between-experiment treatment effects. The latter is likely biased. I discuss some implications of the weighting.
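
A small simulation makes the weighting concrete. In the hypothetical design below, both experiments share the same within-experiment treatment effect, but they differ in baseline outcomes and in treated shares, so the pooled OLS slope picks up the between-experiment level difference as well:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical experiments with the SAME within-experiment treatment
# effect (2.0) but different baselines and different treated shares.
n, beta = 10_000, 2.0
dA = (rng.random(n) < 0.3).astype(float)   # experiment A: 30% treated, baseline 0
dB = (rng.random(n) < 0.7).astype(float)   # experiment B: 70% treated, baseline 5
yA = beta * dA + rng.normal(size=n)
yB = 5.0 + beta * dB + rng.normal(size=n)

d = np.concatenate([dA, dB])
y = np.concatenate([yA, yB])

def ols_slope(x, y):
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / (x @ x)

# Pooled estimator: regress y on d, ignoring which experiment each
# observation came from.
pooled = ols_slope(d, y)

# Within estimator: demean d and y by experiment first (equivalent to
# including experiment fixed effects).
d_w = np.concatenate([dA - dA.mean(), dB - dB.mean()])
y_w = np.concatenate([yA - yA.mean(), yB - yB.mean()])
within = ols_slope(d_w, y_w)

print(round(within, 2), round(pooled, 2))  # within recovers ~2.0; pooled is biased upward
```

Because treatment is correlated with the experiment indicator, the pooled slope loads the 5-point baseline gap onto the treatment coefficient; experiment fixed effects remove this.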

Optimizing experiment design for estimating parametric models in economic experiments 

Abstract: Estimating parametric models of behavior from economic experiments is becoming increasingly common. This paper discusses methods for designing an experiment so that parameters in these models are estimated as precisely as possible. Using an example of designing a battery of pairwise lottery choices, I demonstrate how experiments can be designed for various inferential goals.

Short presentation

Computing quantal response equilibrium in some Bayesian games using marginal strategy profiles 

Abstract: In some Bayesian games, players' utility only depends on the marginal distribution of opponents' actions, and not the joint distribution of actions and types. In these games, marginal Quantal Response Equilibrium (QRE) mixed strategy profiles can be computed without computing this joint distribution. 

Replication code


Quantal response equilibrium as a structural model for estimation: The missing manual (with Theodore Turocy)

Abstract: One of the original objectives of the (logit) quantal response equilibrium (LQRE) model was to provide a method for structural estimation of behaviour in games, when behaviour deviated from Nash equilibrium predictions. To date, only Chapter 6 of the book on quantal response equilibrium by Goeree et al. (2016) focuses on how such estimation can be implemented. We build on that chapter to provide here a more detailed treatment of the methodological issues of implementing maximum likelihood estimation of QRE. We compare the equilibrium correspondence and empirical payoff approaches to estimation, and identify some considerations in interpreting the results of those approaches when applied to the same data on the same game. We also provide a more detailed "field guide" to using numerical continuation methods to accomplish estimation, including guidance on how to tailor implementations to games with different structures. 
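
As a rough illustration of the empirical payoff approach discussed here, the sketch below fits the precision parameter λ by maximum likelihood, evaluating each player's expected payoffs at the opponent's empirical choice frequencies. The game and the choice counts are hypothetical, not data from the paper:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical 2x2 game and choice counts (for illustration only).
U1 = np.array([[0.0, 4.0],
               [1.0, 3.0]])   # row player's payoffs
U2 = np.array([[0.0, 1.0],
               [5.0, 3.0]])   # column player's payoffs
counts1 = np.array([40, 60])  # observed row choices
counts2 = np.array([55, 45])  # observed column choices

# Empirical payoff approach: expected payoffs are evaluated at the
# OPPONENT'S empirical frequencies rather than at an equilibrium profile.
p_hat = counts1 / counts1.sum()
q_hat = counts2 / counts2.sum()

def neg_loglik(lam):
    p = softmax(lam * (U1 @ q_hat))     # row's logit response to empirical q
    q = softmax(lam * (U2.T @ p_hat))   # column's logit response to empirical p
    return -(counts1 @ np.log(p) + counts2 @ np.log(q))

res = minimize_scalar(neg_loglik, bounds=(0.0, 50.0), method="bounded")
lam_hat = res.x
```

The equilibrium correspondence approach would instead re-solve for the QRE profile at each candidate λ, which is where numerical continuation methods come in.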

Sample predictor-corrector codes

Bayesian model selection and prior calibration for structural models in economic experiments: some guidance for the practitioner

Abstract: Bayesian estimates from experimental data can be influenced by highly diffuse or "uninformative" priors. This paper discusses how practitioners can use their own expertise to critique and select a prior that (i) incorporates our knowledge as experts in the field, and (ii) achieves favorable sampling properties. I demonstrate these techniques using data from eleven experiments of decision-making under risk, and discuss some implications of the findings. 

Monotonicity, Non-Participation, and Directed Search Equilibria (with Simon Loertscher)

Abstract: We consider the canonical directed search framework in which sellers play pure strategies, and assume that buyers play strategies that are monotone in prices, that buyers can remain inactive, and that they choose to do so whenever their payoff from participating is zero, regardless of what the other buyers do. We show that directed search equilibria, which have been the focus of the literature, are the only equilibria that satisfy these assumptions. Directed search equilibria are selected here not because buyers cannot coordinate – no such assumption is made – but because the other equilibria require buyers to play strategies that increase the demand for a seller’s good as this good becomes more expensive.

Work in progress

Mixture models of behavior and nuisance parameters: a semi-parametric Bayesian approach (with Justin Tobias)

Abstract: Structural estimation of behavioral models from experimental data has moved from identifying a single model from a menu of alternatives that best explains behavior to estimating the fraction of subjects who behave according to each model, and how these fractions vary with subject characteristics (e.g. Andreoni and Vesterlund, 2001; Harrison and Rutstrom, 2009). Each model typically specifies a function describing behavior, but also requires individual-level parameters (such as measures of inequality aversion, risk aversion, etc.) that must also be estimated.

With the goal of placing minimal structure on the distribution of these individual-level parameters, we take on the problem of estimating the population fractions of each model, and how these probabilities vary with subject characteristics such as sex, age, and race. We model the probability that a subject behaves according to a particular decision rule as a multinomial probit, and assume that the individual-level parameters can be adequately modeled as a finite mixture of normals. We describe a Bayesian estimator that achieves this, and apply it to existing experimental datasets. In addition to the probit parameters, we also obtain shrinkage estimates of the individual-level parameters, and the posterior probability that each subject makes decisions according to each model.

Hospital-Insurer Bargaining Power and Negotiated Rates (with Amanda Cook)

Abstract: In the US health care system, insurance plays two roles. One role is risk-sharing. The other, the focus of this study, is that insurance companies negotiate with hospitals for lower prices, so that prices for the same services can vary depending on which entity, if any, is insuring the patient. We approach this negotiation between hospitals and insurers as an unstructured bargaining problem. Insurers (and hopefully their customers) benefit from negotiating through lower prices, while hospitals benefit through higher demand for their services. Using the Massachusetts Center for Health Information and Analysis (CHIA) data set, we estimate negotiated prices specific to hospital-insurer pairs. Assuming that these prices are the result of generalized Nash bargaining, we structurally estimate hospitals' and insurers' bargaining power, and how this varies with hospital, insurer, and market characteristics. We assess to what extent the lower prices are passed on to consumers. The results of the estimation are also used to assess possible effects of changes in government policy and market structure, and their implications for consumers' welfare.