Research

Current Status: Working Paper

Additional Material: Link to preanalysis plan

Designing Algorithmic Recommendations to Achieve Human–AI Complementarity 

with Jann Spiess

arXiv link

Abstract: Algorithms often assist, rather than replace, human decision-makers. However, these algorithms typically address the problem the decision-maker faces without modeling how their outputs cause the human to make different decisions. This discrepancy between the design and the role of algorithmic assistants becomes particularly apparent in light of empirical evidence suggesting that algorithmic assistants often fail to improve human decisions. In this article, we formalize the design of recommendation algorithms that assist human decision-makers without making restrictive assumptions about how the humans use those recommendations. We formulate an algorithmic design problem that leverages the potential-outcomes framework from causal inference to model the effect of recommendations on a human’s binary treatment choice. We introduce a monotonicity assumption that gives intuitive structure to the feasible responses the human could have to the recommendation. Under this monotonicity assumption, we can express the human’s response to an algorithmic recommendation in terms of their compliance with the algorithm and the decision they would take if unassisted, both of which can be estimated from the human’s decision data. We showcase our framework using data from an online hiring experiment to explain why subjects who received a recommendation that complemented the structure of their private information outperformed counterparts who received the optimal decision algorithm as a recommendation.
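A minimal sketch of the monotonicity structure described above, in illustrative notation that need not match the paper’s: let D(r) denote the human’s potential binary decision when shown recommendation r, for r in {0,1}.

% Illustrative sketch; the notation D(r) is mine, not necessarily the paper's.
% Monotonicity: the recommendation never pushes the decision against itself,
%   D(1) >= D(0),
% which rules out "defiers" and leaves only never-takers (D(0)=D(1)=0),
% always-takers (D(0)=D(1)=1), and compliers (D(0)=0, D(1)=1). Hence
\[
  D(r) = D(0) + r\,\bigl(D(1) - D(0)\bigr),
\]
% so the response to a recommendation is pinned down by the unassisted
% decision D(0) and compliance D(1) - D(0), both estimable from decision data.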

Current Status: Working Paper

(previously accepted, presented, and published [as an extended abstract] at EC’23)

Additional Material: N/A

Algorithmic Assistance with Recommendation-Dependent Preferences  

with Jann Spiess

arXiv link

Abstract: When we use algorithms to produce risk assessments, we typically think of these predictions as providing helpful input to human decisions, such as when risk scores are presented to judges or doctors. But when a decision-maker obtains algorithmic assistance, they may not react only to the information it conveys. The decision-maker may view the algorithm’s input as recommending a default action, making it costly for them to deviate, such as when a judge is reluctant to overrule a high-risk assessment of a defendant or a doctor fears the consequences of deviating from recommended procedures. In this article, we consider the effect and design of algorithmic recommendations when they affect choices not just by shifting beliefs, but also by altering preferences. We motivate our model by institutional factors, such as a desire to avoid audits, as well as by well-established models from behavioral science that predict loss aversion relative to a reference point, which here is set by the algorithm. We show that recommendation-dependent preferences create inefficiencies whereby the decision-maker is overly responsive to the recommendation. As a potential remedy, we discuss algorithms that strategically withhold recommendations and show how they can improve the quality of final decisions.
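A stylized one-line version of recommendation-dependent preferences (my own illustration; the symbols U, v, and kappa are assumed notation, not the paper’s exact specification): the decision-maker’s payoff adds a cost for deviating from the recommended default.

% Stylized illustration; U, v, and kappa are assumptions, not the paper's model.
\[
  U(a; r) = \mathbb{E}\bigl[v(a)\bigr] - \kappa\,\mathbf{1}\{a \neq r\},
\]
% where a is the action, r the recommendation, and kappa > 0 captures, e.g.,
% audit risk or loss aversion relative to the reference point set by the
% algorithm. As kappa grows, the decision-maker follows the recommendation
% even when their private information argues against it, producing the
% over-responsiveness described in the abstract.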

Current Status: Working Paper

(previously accepted, presented, and published [as an extended abstract] at FAccT’22)

Additional Material: Link to preanalysis plan

On the Fairness of Machine-Assisted Human Decisions

with Talia Gillis and Jann Spiess

arXiv link

Abstract: When machine-learning algorithms are deployed in high-stakes decisions, we want to ensure that their deployment leads to fair and equitable outcomes. This concern has motivated a fast-growing literature that focuses on diagnosing and addressing disparities in machine predictions. However, many machine predictions are deployed to assist in decisions where a human decision-maker retains the ultimate decision authority. In this article, we therefore consider, in a formal model and in a lab experiment, how properties of machine predictions affect the resulting human decisions. In our formal model of statistical decision-making, we show that the inclusion of a biased human decision-maker can reverse common relationships between the structure of the algorithm and the qualities of the resulting decisions. Specifically, we document that excluding information about protected groups from the prediction may fail to reduce, and may even increase, ultimate disparities. In the lab experiment, we demonstrate how predictions informed by gender-specific information can reduce average gender disparities in decisions. While our concrete theoretical results rely on specific assumptions about the data, the algorithm, and the decision-maker, and the experiment focuses on a particular prediction task, our findings show more broadly that any study of critical properties of complex decision systems, such as the fairness of machine-assisted human decisions, should go beyond focusing on the underlying algorithmic predictions in isolation.
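A toy two-group example of why blinding the prediction need not shrink disparities (my own construction, not the paper’s model): suppose the human averages the prediction with a private group belief that may be stereotyped.

% Toy construction; lambda, mu_g, and b_g are assumed notation, not the paper's.
% The human averages the prediction \hat{y} with a stereotyped group belief:
\[
  d_g = \lambda\,\hat{y} + (1-\lambda)\,(\mu_g + b_g),
\]
% with true group means mu_A > mu_B and stereotype biases b_g. If the
% group-aware prediction \hat{y} = mu_g is accurate enough that the human
% follows it (lambda near 1), the decision gap is about mu_A - mu_B. A
% group-blind prediction \hat{y} = \bar{mu} is less informative, so the human
% down-weights it (lambda near 0) and the gap becomes
%   (mu_A - mu_B) + (b_A - b_B),
% which exceeds mu_A - mu_B whenever the stereotype exaggerates group
% differences (b_A > b_B).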

Current Status: Published in The Electronic Journal of Combinatorics in 2022 link

Additional Material: N/A

On Distinct Distances Between a Variety and a Point Set 

with Mohamed Omar

arXiv link

Abstract: We consider the problem of determining the number of distinct distances between two point sets in the real plane, where one point set of size m lies on a real algebraic curve of fixed degree r and the other point set of size n is arbitrary. We generalize lower bounds formulated by Pohoata and Sheffer to a much weaker set of restrictions on the arrangement of the point sets. This complements the work of Pach and de Zeeuw.
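For concreteness, the quantity being bounded, in standard notation that simply restates the setup above:

% Standard notation for the setup described in the abstract.
\[
  D(P_1, P_2) = \bigl|\{\, \lVert p - q \rVert : p \in P_1,\ q \in P_2 \,\}\bigr|,
\]
% where |P_1| = m and P_1 lies on a real algebraic curve of fixed degree r in
% the plane, while P_2 is an arbitrary set of n points; the paper gives lower
% bounds on D(P_1, P_2) under weaker restrictions than those of Pohoata and
% Sheffer.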