About

Understanding human decision-making is a key focus of behavioral economics, psychology, and neuroscience, with far-reaching applications from public policy to industry. Recently, advances in machine learning have produced better predictive models of human decisions [3, 5] and even enabled new theories [2, 4] of decision-making. At the same time, machine learning systems are increasingly used to make decisions that affect people, including hiring, resource allocation, and parole. These lines of work are deeply interconnected: learning what people value is crucial both for predicting their decisions and for making good decisions on their behalf. In this workshop, we will bring together experts from the wide array of disciplines concerned with human and machine decisions to exchange ideas around three main focus areas:

  • Theories of decision making → better machine learning methods

How can we leverage insights from mathematical and behavioral theories of decision making to improve machine learning methods for predicting human decisions? What inductive biases or modeling choices do these theoretical insights suggest? How can we incorporate deviations from traditional economic rationality (e.g., context effects) into predictive models of choice? Answers could lead to more sample-efficient, accurate, or interpretable machine learning methods for predicting human decisions.
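As one concrete illustration of the kind of model these questions concern, the sketch below contrasts a standard multinomial logit choice model with a variant that adds a simple context-dependent term. The feature names, weights, and the specific form of the context adjustment are hypothetical; this is a minimal sketch of the idea, not a method proposed by the workshop.

```python
import numpy as np

def logit_choice_probs(utilities):
    """Standard multinomial logit: P(choose i) is proportional to exp(u_i)."""
    z = utilities - utilities.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    expz = np.exp(z)
    return expz / expz.sum(axis=-1, keepdims=True)

def context_adjusted_utilities(option_features, w, alpha):
    """Hypothetical context effect: shift each option's utility by how far it sits
    from the mean of the current choice set, a crude stand-in for a compromise effect."""
    base = option_features @ w                        # context-free linear utilities
    set_mean = option_features.mean(axis=0)           # reference point defined by the choice set
    distance = np.linalg.norm(option_features - set_mean, axis=1)
    return base - alpha * distance                    # penalize extreme options

# Toy choice set: three options described by (price, quality); numbers are made up.
options = np.array([[1.0, 0.2],
                    [0.6, 0.5],
                    [0.2, 0.9]])
w = np.array([-1.0, 2.0])  # assumed taste weights

print(logit_choice_probs(options @ w))                                         # context-free prediction
print(logit_choice_probs(context_adjusted_utilities(options, w, alpha=0.5)))   # with context term
```

The point of the contrast is that the same observed choices can be fit by models with very different inductive biases; encoding a behavioral regularity directly in the utility function is one way theoretical insight can shape a predictive model.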


  • Machine learning → better theories of decision making

How can we best leverage machine learning models to improve theoretical (explanatory/interpretable) accounts of human decision making? How can we derive meaningful and interesting explanations of how people decide from more accurate machine learning models? What types of models best align with real-world decision-making behavior, and what does this tell us about behavior? Which datasets are most conducive to training machine learning models of decision and choice? Given the centrality of decision making in the social sciences, answers could find applications well beyond the study of decision making itself.


  • Improving the interaction between people and decision-making AI

As artificial intelligence is increasingly used to make decisions people care about, it becomes imperative to identify the utilities and preferences of the people affected by those decisions. A natural approach is to learn these functions with machine learning, which raises many questions. As human data is scarce and expensive to collect, how can we learn people's utilities in a sample-efficient way? Should we rely on data-driven models or incorporate expert knowledge into our learning algorithms? How can we make our learning algorithms understandable and interpretable, so that their decisions are more likely to be trusted and adopted by experts in healthcare, law, banking, and education? How do we ensure fairness in decision-making systems?
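One common way to learn a utility function from limited human data is preference learning from pairwise comparisons. The sketch below fits a linear utility with a Bradley-Terry style logistic likelihood; the data, feature dimensions, and hyperparameters are hypothetical, and the code is a minimal illustration under those assumptions rather than a recommended pipeline.

```python
import numpy as np

def fit_pairwise_utilities(X_winner, X_loser, lr=0.1, epochs=500):
    """Fit a linear utility u(x) = w.x from pairwise comparisons, assuming
    P(a preferred to b) = sigmoid(u(a) - u(b)) (Bradley-Terry / logistic model).
    Plain gradient ascent on the mean log-likelihood."""
    diffs = X_winner - X_loser                      # one row per observed comparison
    w = np.zeros(diffs.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-diffs @ w))        # predicted P(winner beats loser)
        w += lr * diffs.T @ (1.0 - p) / len(diffs)  # gradient step on the log-likelihood
    return w

# Hypothetical data: options described by 3 features, comparisons simulated from a "true" utility.
rng = np.random.default_rng(0)
true_w = np.array([1.5, -0.5, 0.8])
A, B = rng.normal(size=(200, 3)), rng.normal(size=(200, 3))
prefers_A = rng.random(200) < 1.0 / (1.0 + np.exp(-(A - B) @ true_w))
X_winner = np.where(prefers_A[:, None], A, B)
X_loser = np.where(prefers_A[:, None], B, A)

w_hat = fit_pairwise_utilities(X_winner, X_loser)
print(np.round(w_hat, 2))  # should roughly recover the direction of true_w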


[1] Colin F Camerer. Artificial intelligence and behavioral economics. In The Economics of Artificial Intelligence, pages 587–610. University of Chicago Press, 2019.

[2] Drew Fudenberg and Annie Liang. Machine learning for evaluating and improving theories. ACM SIGecom Exchanges, 18(1):4–11, 2020.

[3] Jake M Hofman, Amit Sharma, and Duncan J Watts. Prediction and explanation in social systems. Science, 355(6324):486–488, 2017.

[4] Joshua Peterson, David Bourgin, Mayank Agrawal, Daniel Reichman, and Thomas L. Griffiths. Using large-scale experiments and machine learning to discover theories of human decision-making. Science, 372:1209–1214, 2021.

[5] Ariel Rosenfeld and Sarit Kraus. Predicting human decision-making: From prediction to action. Synthesis Lectures on Artificial Intelligence and Machine Learning, 12(1):1–150, 2018.

[6] Matthew J Salganik, Ian Lundberg, Alexander T Kindel, Caitlin E Ahearn, Khaled Al-Ghoneim, Abdullah Almaatouq, Drew M Altschul, Jennie E Brand, Nicole Bohme Carnegie, Ryan James Compton, et al. Measuring the predictability of life outcomes with a scientific mass collaboration. Proceedings of the National Academy of Sciences, 117(15):8398–8403, 2020.