Working Papers (available upon request)
Choosing Between Algorithmic Forecasters: What Drives Delegation?, with M. Chevrier and F. Fidanoski
Under review at Experimental Economics
Algorithm Control and Responsibility: Shifting Blame to the Programmer?, with M. Chevrier
Under review at Journal of Behavioral Decision Making
Court Resolution and Predictive Justice, with S. Massoni
Accepted by the French Association of Law and Economics for submission to the European Journal of Law and Economics
Why Partial Information Is Undervalued: Cognitive and Motivational Channels in Ambiguity and Information Demand, with S. Massoni
Measuring Perceived Inequality: An Empirical Comparison of Methods, with Y. Kaouane, E. Kemel and E.W. Tchalanga
Eliciting Risk Preferences: Overcoming Probability Distortions, with M. Abdellaoui, S. Massoni and L. Page
Work in Progress
Is the Gender Pay Gap Fair? Perceptions and Policy Support in Morocco, with S. Abouri (Data collection ongoing)
General population survey in Morocco on the causes and consequences of perceived inequality, with E. Kemel
Replication of Banovetz & Oprea (2023), American Economic Journal: Microeconomics, conducted within the I4Replication Lab, with A. Boufarsi, P. Crosetto, A-G. Maltese, and D. Mayaux.
Book Chapter
Voiture autonome et perception de responsabilité [Autonomous vehicles and perceived responsibility], M. Chevrier & V. Teixeira (2025).
Chapter 5 in the report La régulation des voitures autonomes, working group chaired by Louis Schweizer.
Le Club des juristes, forthcoming.
Abstracts of the Papers
Choosing Between Algorithmic Forecasters: What Drives Delegation?, with M. Chevrier and F. Fidanoski
Prior work on algorithmic aversion shows that people are less forgiving of algorithmic than of human errors. However, little is known about how the magnitude and frequency of errors shape delegation. We run a laboratory energy-forecasting experiment in which participants make repeated predictions and can delegate subsequent forecasts to any subset of up to nine agents whose past performance is displayed. Agent type (human vs. algorithm) and the structure of expected payoffs across agents (varying vs. fixed) are manipulated between subjects. Across treatments, delegation is primarily explained by expected payoffs; conditional on payoffs, error magnitude and frequency add little predictive power. Consistent with this, participants prefer agents who make few large errors over those who make many small errors when both yield similar expected payoffs. This result does not hold when agents’ payoffs are fixed. This asymmetry reflects a comparative evaluation effect, whereby algorithms benefit more than humans from relative performance comparisons.
Algorithm Control and Responsibility: Shifting Blame to the Programmer?, with M. Chevrier
In a laboratory experiment, we investigate whether individuals delegate allocation decisions to an intermediary in order to shift blame. Depending on the treatment, the intermediary is either a human, a rule-based algorithm (RA), or an artificial intelligence algorithm (AI). Behind these algorithms, a programmer fully controls the decisions of the RA and partially controls the decisions of the AI. We find that delegation rates do not differ across intermediary types (human, RA, or AI). Human intermediaries and RA programmers are judged more responsible for inegalitarian allocations than the respective delegators. By contrast, when the intermediary is an AI, programmers are perceived as less responsible for the AI's inegalitarian allocations, while delegators bear most of the blame. Nevertheless, recipients choose to punish less often when the intermediary is an AI. Finally, reduced programmer control over the algorithm creates moral "wiggle room", leading programmers to select unequal allocations more frequently.
Court Resolution and Predictive Justice, with S. Massoni
This article examines the impact of predictive justice on litigants’ decisions in French divorce cases. Court backlog is a persistent issue, and predictive justice tools aim to facilitate out-of-court settlements by informing litigants about the likely judicial outcome. We conducted a laboratory experiment in which pairs of subjects—one representing the claimant and the other the defendant—negotiated over alimony payments across three rounds. If no agreement was reached, the case was decided by a judge and legal costs were imposed. Depending on the treatment, one or both parties received information from a predictive justice algorithm. Contrary to expectations, providing information to the claimant or to both parties reduced the likelihood of reaching an out-of-court settlement, largely due to more aggressive strategic behavior. However, when settlements did occur, the alimony amounts were closer to those that the judge would have awarded, suggesting that predictive justice can improve the quality of agreements even as it decreases their frequency. These findings highlight a trade-off between reducing court congestion and enhancing the fairness and accuracy of negotiated outcomes.
Why Partial Information Is Undervalued: Cognitive and Motivational Channels in Ambiguity and Information Demand, with S. Massoni
The value of information under ambiguity depends not only on preferences toward uncertainty but also on how individuals perceive changes in likelihood precision. We experimentally decompose ambiguity attitudes into a motivational component—ambiguity aversion—and an information-processing component—ambiguity-generated insensitivity—and examine how these dimensions vary across sources of uncertainty and shape the valuation of information. In two incentivized experiments (252 and 247 participants) involving artificial (Ellsberg-type) and structured natural (predictive-justice) environments, we find that ambiguity aversion is source-dependent, whereas ambiguity-generated insensitivity is stable across sources. Ambiguity aversion increases willingness to pay for information that fully resolves uncertainty, particularly in artificial settings. By contrast, ambiguity-generated insensitivity systematically attenuates the perceived value of predictive information, even when such information is objectively more informative in the sense of refining the likelihood structure. These findings provide empirical evidence consistent with a dual-process mechanism linking motivational and cognitive components of ambiguity attitudes to information demand and help explain why partially informative, including algorithmic, decision aids may be undervalued in practice.
Measuring Perceived Inequality: An Empirical Comparison of Methods, with Y. Kaouane, E. Kemel and E.W. Tchalanga
Beyond the reality of income inequality, its perception also matters. Several methods have been developed to measure perceived inequality, with mixed results regarding (i) how perceived inequality compares with actual inequality, and (ii) its power to predict redistribution preferences. This study compares five quantitative methods for measuring perceived income distributions, including a novel method adapted from the measurement of beliefs in behavioral economics. We assess their consistency, accuracy, and ability to predict preferences for redistribution. The methods are implemented in an incentivized, choice-based, within-subjects experiment. Notably, our subjects are from Morocco and consider inequality and redistribution in France, thereby taking a spectator perspective. Following a pre-registered plan, we use econometric methods to estimate perceived Gini indices. This allows us to test the calibration and consistency of perceived inequality across methods as well as its power to predict redistributive choices. The different methods give consistent results regarding the perception of average income, and more heterogeneous results regarding the perceived Gini. The method based on the elicitation of histograms provides the best results according to our comparisons.