Research

Grants

AIM-AHEAD Consortium Development Projects to Advance Health Equity

Role: Co-Investigator (PI: Laura Brandt)

Grant Title: Towards a framework for algorithmic bias and fairness in predicting treatment outcomes for opioid use disorder

Funded by: National Institutes of Health’s Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD)

Books

Kantian Ethics and the Attention Economy with Timothy Aylsworth Palgrave Macmillan (forthcoming) 

The problematic use of technologies like smartphones threatens our autonomy in a variety of ways, and critics have only begun to appreciate the vast scope of this problem. In the last decade, we have seen a flurry of books making “self-help” arguments about how we could live happier, more fulfilling lives if we were less addicted to our phones. But none of these authors see this issue as one involving a moral duty to protect our autonomy. In this book, we draw on the deep well of Kantian ethics to argue that we have moral duties, both to ourselves and to others, to protect our autonomy from the threat posed by the problematic use of technology. 

Algorithms & Autonomy with Alan Rubel and Adam Pham Cambridge University Press (2021)

Algorithms influence every facet of modern life: criminal justice, education, housing, entertainment, elections, work...the list goes on. Delegating important decisions to machines gives rise to deep moral concerns about responsibility, transparency, freedom, fairness, and democracy. This book examines those issues by connecting them to the central human value of autonomy.   

Reviewed in Journal of the Association for Information Science and Technology: https://asistdl.onlinelibrary.wiley.com/doi/full/10.1002/asi.24651 

Articles

Does Predictive Sentencing Make Sense? with Alan Rubel and Lindsey Schwartz Inquiry (forthcoming)

This paper examines the practice of using predictive systems to lengthen the prison sentences of convicted persons when the systems forecast a higher likelihood of re-offense or re-arrest. There has been much critical discussion of technologies used for sentencing, including questions of bias and opacity. However, there hasn’t been a discussion of whether this use of predictive systems makes sense in the first place. We argue that it does not by showing that there is no plausible theory of punishment that supports it.

Capturing Drug Use Patterns at a Glance: An n-Ary Word Sufficient Statistic for Repeated Univariate Categorical Values with Gabriel Odom et al. PLoS ONE (2023)

The efficacy of treatments for substance use disorders (SUD) is tested in clinical trials in which participants typically provide urine samples, which are analyzed via urine drug screenings (UDS) to detect whether the person has used certain substances. UDS data form the foundation of treatment outcome assessment in the vast majority of SUD clinical trials. However, existing methods for calculating treatment outcomes are not standardized, impeding comparability between studies and hindering the reproducibility of results.
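
The "n-ary word" idea in the title can be pictured with a small sketch. The snippet below is only my illustration of the general approach, encoding one participant's sequence of screening results as a single string from which different outcome definitions can be computed; the alphabet, helper names, and example outcome definition are assumptions for illustration, not the paper's specification.

    # Illustrative sketch only (not the paper's exact method): encode one
    # participant's weekly urine drug screen (UDS) results as a single
    # "word" over a small alphabet, then compute outcomes from that word.
    SYMBOLS = {"positive": "+", "negative": "-", "missing": "o"}

    def uds_word(weekly_results):
        """Map a list of weekly UDS results to a word such as '+-o--'."""
        return "".join(SYMBOLS[r] for r in weekly_results)

    def abstinent_in_final_weeks(word, n=4):
        """One possible outcome definition: only negative screens in the
        final n weeks of the trial."""
        return all(symbol == "-" for symbol in word[-n:])

    word = uds_word(["positive", "negative", "missing", "negative", "negative"])
    print(word)                            # '+-o--'
    print(abstinent_in_final_weeks(word))  # False: a missing screen falls in the window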

The Fair Chances in Algorithmic Fairness: A Response to Holm with Michele Loi Res Publica 29, 231–237 (2023)

Sune Holm (2022) argues that a class of algorithmic fairness measures, which he refers to as the “performance parity criteria,” can be understood as applications of John Broome’s Fairness Principle. We argue that the performance parity criteria cannot be read this way. This is because in the relevant context, the Fairness Principle requires the equalization of actual individuals’ individual-level chances of obtaining some good (such as an accurate prediction from a predictive system), but the performance parity criteria do not guarantee any such thing: the measures merely ensure that certain population-level ratios hold.
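
To make the contrast concrete, here is a toy construction of my own (not taken from the paper): both groups below satisfy a performance-parity-style criterion, equal accuracy, even though no individual's chance of receiving an accurate prediction is thereby equalized.

    # Toy illustration (my construction): a group-level ratio is equalized
    # while individual-level chances of an accurate prediction are not.
    group_a = [1, 1, 1, 1, 0]   # 1 = this person is always predicted accurately, 0 = never
    group_b = [0, 1, 1, 1, 1]

    accuracy_a = sum(group_a) / len(group_a)   # 0.8
    accuracy_b = sum(group_b) / len(group_b)   # 0.8
    print(accuracy_a == accuracy_b)            # True: the population-level ratios match
    print(sorted(set(group_a + group_b)))      # [0, 1]: individual chances are far from equal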

Egalitarian Machine Learning with David O'Brien and Ben Schwan Res Publica 29, 237–264 (2023)

Prediction-based decisions, which are often made by utilizing the tools of machine learning, influence nearly all facets of modern life. Ethical concerns about this widespread practice have given rise to the field of fair machine learning and a number of fairness measures, mathematically precise definitions of fairness that purport to determine whether a given prediction-based decision system is fair. Following Reuben Binns, we take “fairness” in this context to be a placeholder for a variety of normative egalitarian considerations. We explore a few fairness measures to suss out their egalitarian roots and evaluate them, both as formalizations of egalitarian ideas and as assertions of what fairness demands of predictive systems. We pay special attention to a recent and popular fairness measure, counterfactual fairness, which holds that a prediction about an individual is fair if it is the same in the actual world and any counterfactual world where the individual belongs to a different demographic group.
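
For readers who want the formal statement, counterfactual fairness is standardly rendered in the causal-modeling notation of the fair machine learning literature roughly as follows (this gloss is mine, not quoted from the paper):

\[
P\big(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a\big) \;=\; P\big(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a\big)
\]

for every outcome $y$ and every alternative group membership $a'$, where $A$ is the protected attribute, $X$ the observed features, and $U$ the background variables of the assumed causal model: the prediction's distribution should not change under a counterfactual intervention on group membership.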

Just Machines Public Affairs Quarterly 36 (2): 163–183. 2022

A number of findings in the field of machine learning have given rise to questions about what it means for automated scoring- or decision-making systems to be fair. One center of gravity in this discussion is whether such systems ought to satisfy classification parity (which requires parity in accuracy across groups, defined by protected attributes) or calibration (which requires similar predictions to have similar meanings across groups, defined by protected attributes). Central to this discussion are impossibility results, which show that classification parity and calibration are often incompatible. This paper aims to argue that classification parity, calibration, and a newer, interesting measure called counterfactual fairness are unsatisfactory measures of fairness, offer a general diagnosis of the failure of these measures, and sketch an alternative approach to understanding fairness in machine learning.
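
A minimal sketch, on toy data of my own, of what the two families of measures ask one to compute: classification parity compares error rates across the protected groups, while calibration compares what a given prediction means (the observed outcome rate among people who receive it) across those groups.

    # Hedged sketch on invented toy data; variable names and threshold are mine.
    import numpy as np

    y_true = np.array([1, 0, 1, 0, 1, 0, 1, 1])                 # actual outcomes
    y_score = np.array([0.9, 0.8, 0.3, 0.2, 0.7, 0.6, 0.8, 0.1])  # predicted scores
    group = np.array(list("AAAABBBB"))                           # protected attribute
    y_pred = (y_score >= 0.5).astype(int)                        # thresholded predictions

    for g in ("A", "B"):
        m = group == g
        fpr = y_pred[m][y_true[m] == 0].mean()   # classification parity: compare error rates across groups
        ppv = y_true[m][y_pred[m] == 1].mean()   # calibration-style check: compare prediction meanings across groups
        print(g, round(float(fpr), 2), round(float(ppv), 2))

The impossibility results mentioned in the abstract show that, outside of special cases (for example, equal base rates across groups or a perfect predictor), these two sorts of comparisons cannot both be made to come out equal at once.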

On the Duty to be an Attention Ecologist with Tim Aylsworth Philosophy & Technology 35 (1): 1-22. 2022

The attention economy, the market where consumers’ attention is exchanged for goods and services, poses a variety of threats to individuals’ autonomy, which, at minimum, involves the ability to set and pursue ends for oneself. It has been argued that the threat wireless mobile devices pose to autonomy gives rise to a duty to oneself to be a digital minimalist, one whose interactions with digital technologies are intentional such that they do not conflict with their ends. In this paper, we argue that there is a corresponding duty to others to be an attention ecologist, one who promotes digital minimalism in others. Although the moral reasons for being an attention ecologist are similar to those that motivate the duty to oneself, the arguments diverge in important ways. We explore the application of this duty in various domains where we have special obligations to promote autonomy in virtue of the different roles we play in the lives of others, such as parents and teachers. We also discuss the consequences of our arguments for employers, software developers, and policy makers.

Is There a Duty to Be a Digital Minimalist? with Tim Aylsworth Journal of Applied Philosophy 38: 662-673. 2021

The harms associated with wireless mobile devices (e.g., smartphones) are well documented. They have been linked to anxiety, depression, diminished attention span, sleep disturbance, and decreased relationship satisfaction. Perhaps what is most worrying from a moral perspective, however, is the effect these devices can have on our autonomy. In this paper, we argue that there is an obligation to foster and safeguard autonomy in ourselves, and we suggest that wireless mobile devices pose a serious threat to our capacity to fulfill this obligation. We defend the existence of an imperfect duty to be a “digital minimalist.” That is, we have a moral obligation to be intentional about how and to what extent we use these devices. The empirical findings already justify prudential reasons in favor of digital minimalism, but the moral duty is distinct from and independent of prudential considerations. 

What Should We Agree on about the Repugnant Conclusion? with Stéphane Zuber et al. Utilitas 33(4), 379-383. 2021

The Repugnant Conclusion is an implication of some approaches to population ethics. It states, in Derek Parfit’s original formulation, 

For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better, even though its members have lives that are barely worth living. (Parfit 1984: 388) 

This conclusion has been the subject of several formal proofs of incompatibility in the literature (Ng 1989; Arrhenius 2000, forthcoming) and has been an enduring focus of population ethics. 

The Repugnant Conclusion served an important purpose in catalyzing and inspiring the pioneering stage of population ethics research. We believe, however, that the Repugnant Conclusion now receives too much focus. Avoiding the Repugnant Conclusion should no longer be the central goal driving population ethics research, despite its importance to the fundamental accomplishments of the existing literature.

Is the Attention Economy Noxious? with Adam Pham Philosophers' Imprint 20 (17): 1-13. 2020

A growing amount of media is paid for by its consumers through their very consumption of it. Typically, this new media is web-based and paid for by advertising. It includes the services offered by Facebook, Instagram, Snapchat, and YouTube. We offer an ethical assessment of the attention economy, the market where attention is exchanged for new media. We argue that the assessment has ethical implications for how the attention economy should be regulated. To conduct the assessment, we employ two heuristics for evaluating markets. One is the “harm” criterion, which relates to whether the market tends to engender extremely harmful outcomes for individuals or society as a whole. The other is the “agency” criterion, which relates not to the outcomes of the market, but rather, to whether it somehow reflects or has its source in weakened agency. We argue that the attention economy animates concerns with respect to both criteria and that new media should be subject to the same sort of regulation as other harmful, addictive products. 

Algorithms, Bias, and the Importance of Agency with Adam Pham and Alan Rubel Social Theory and Practice 46 (3): 547-572. 2020. 

Algorithmic systems and predictive analytics play an increasingly important role in various aspects of modern life. Scholarship on the moral ramifications of such systems is in its early stages, and much of it focuses on bias and harm. This paper argues that in understanding the moral salience of algorithmic systems it is essential to understand the relation between algorithms, autonomy, and agency. We draw on several recent cases in criminal sentencing and K–12 teacher evaluation to outline four key ways in which issues of agency, autonomy, and respect for persons can conflict with algorithmic decision-making. Three of these involve failures to treat individual agents with the respect they deserve. The fourth involves distancing oneself from a morally suspect action by attributing one’s decision to take that action to an algorithm, thereby laundering one’s agency. 

Agency Laundering and Information Technologies with Alan Rubel and Adam Pham Ethical Theory and Moral Practice 22 (4): 1017-1041. 2019.

When agents insert technological systems into their decision-making processes, they can obscure moral responsibility for the results. This can give rise to a distinct moral wrong, which we call “agency laundering.” At root, agency laundering involves obfuscating one’s moral responsibility by enlisting a technology or process to take some action and letting it forestall others from demanding an account for bad outcomes that result. We argue that the concept of agency laundering helps in understanding important moral problems in a number of recent cases involving automated, or algorithmic, decision-systems. We apply our conception of agency laundering to a series of examples, including Facebook’s automated advertising suggestions, Uber’s driver interfaces, algorithmic evaluation of K-12 teachers, and risk assessment in criminal sentencing. We distinguish agency laundering from several other critiques of information technology, including the so-called “responsibility gap,” “bias laundering,” and masking. 

What's Wrong with Machine Bias Ergo: An Open Access Journal of Philosophy 6. 2019.

Data-driven, decision-making technologies used in the justice system to inform decisions about bail, parole, and prison sentencing are biased against historically marginalized groups (Angwin, Larson, Mattu, & Kirchner 2016). But these technologies’ judgments—which reproduce patterns of wrongful discrimination embedded in the historical datasets that they are trained on—are well-evidenced. This presents a puzzle: how can we account for the wrong these judgments engender without also indicting morally permissible statistical inferences about persons? I motivate this puzzle and attempt an answer. 

The moral limits of the market: the case of consumer scoring data with Adam Pham Ethics and Information Technology 21 (2): 117-126. 2019.

We offer an ethical assessment of the market for data used to generate what are sometimes called “consumer scores” (i.e., numerical expressions that are used to describe or predict people’s dispositions and behavior), and we argue that the assessment has ethical implications for how the market for consumer scoring data should be regulated. To conduct the assessment, we employ two heuristics for evaluating markets. One is the “harm” criterion, which relates to whether the market produces serious harms, either for participants in the market, for third parties, or for society as a whole. The other is the “agency” criterion, which relates to whether participants understand the nature and significance of the exchanges they are making, whether they can be guaranteed fair representation, and whether there is differential need for the market’s good. We argue that consumer scoring data should be subject to the same sort of regulation as the older FICO credit scores. Although the 1990s movement to regulate FICO scores was not aimed at restraining a market per se, we argue that the reforms were underwritten by concerns about the same sorts of problems as those outlined by our heuristics. Therefore, consumer data should be subject to the same sort of regulation.

The imprecise impermissivist’s dilemma with Casey Hart Synthese 196 (4): 1623-1640. 2019.

Impermissivists hold that an agent with a given body of evidence has at most one rationally permitted attitude that she should adopt towards any particular proposition. Permissivists deny this, often motivating permissivism by describing scenarios that pump our intuitions that the agent could reasonably take one of several attitudes toward some proposition. We criticize the following impermissivist response: while it seems like any of that range of attitudes is permissible, what is actually required is the single broad attitude that encompasses all of these single attitudes. While this might seem like an easy way to win over permissivists, we argue that this impermissivist response leads to an indefensible epistemology; permissive intuitions are not so easily co-opted. 

Ideal counterpart theorizing and the accuracy argument for probabilism with Olav Vassend Analysis 78 (2): 207-216. 2018.

One of the main goals of Bayesian epistemology is to justify the rational norms credence functions ought to obey. Accuracy arguments attempt to justify these norms from the assumption that the source of value for credences relevant to their epistemic status is their accuracy. This assumption and some standard decision-theoretic principles are used to argue for norms like Probabilism, the thesis that an agent’s credence function is rational only if it obeys the probability axioms. We introduce an example that shows that the accuracy arguments for Probabilism given by Joyce and Pettigrew fail, and that Probabilism in fact turns out to be false given Pettigrew’s way of conceiving of the goal of having accurate credences. Finally, we use our discussion of Pettigrew’s framework to draw an important general lesson about normative theorizing that relies on the positing of ideal agents. 
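
For orientation, and as standard background rather than anything specific to this paper: Probabilism requires credences to obey the probability axioms, and a commonly used inaccuracy measure in this literature is the Brier score. Probabilism states that a credence function $c$ over an algebra of propositions is rational only if $c$ is a probability function, i.e., $c(\top) = 1$, $c(X) \geq 0$ for every $X$, and $c(X \vee Y) = c(X) + c(Y)$ whenever $X$ and $Y$ are incompatible. The Brier score measures the inaccuracy of $c$ at a world $w$ as

\[
\mathcal{I}(c, w) \;=\; \sum_{X} \big( c(X) - v_w(X) \big)^2, \qquad v_w(X) =
\begin{cases}
1 & \text{if } X \text{ is true at } w \\
0 & \text{otherwise.}
\end{cases}
\]

Accuracy arguments apply decision-theoretic principles (for instance, the avoidance of dominated options) to measures like $\mathcal{I}$ to conclude that only probabilistically coherent credence functions are rational; the counterexample described above targets that inference as developed by Joyce and Pettigrew.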

Book Chapters

Social Media, Emergent Manipulation, and Political Legitimacy with Adam Pham and Alan Rubel in F. Jongepier and M. Klenk (Eds.), Manipulation Online (Routledge). 2022.

Psychometrics firms such as Cambridge Analytica (CA) and troll factories such as the Internet Research Agency (IRA) have had a significant effect on democratic politics, through narrow targeting of political advertising (CA) and concerted disinformation campaigns on social media (IRA). It is natural to think that such activities manipulate individuals and, hence, are wrong. Yet, as some recent cases illustrate, the moral concerns with these activities cannot be reduced simply to the effects they have on individuals. Rather, we will argue, the wrongness of these activities relates to the threats they present to the legitimacy of political orders. This occurs primarily through a mechanism we call “emergent manipulation,” rather than through the sort of manipulation that involves specific individuals. 

Epistemic Paternalism Online with Alan Rubel and Adam Pham in Guy Axtell & Amiel Bernal (eds.), Epistemic Paternalism, Rowman & Littlefield. pp. 29-44. 2020.

New media (highly interactive digital technology for creating, sharing, and consuming information) affords users a great deal of control over their informational diets. As a result, many users of new media unwittingly encapsulate themselves in epistemic bubbles (epistemic structures, such as highly personalized news feeds, that leave relevant sources of information out (Nguyen forthcoming)). Epistemically paternalistic alterations to new media technologies could be made to pop at least some epistemic bubbles. We examine one such alteration that Facebook has made in an effort to fight fake news and conclude that it is morally permissible. We further argue that many epistemically paternalistic policies can (and should) be a perennial part of the internet information environment. 

Proceedings

Fairness and Machine Fairness with David O’Brien and Ben Schwan, Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. 2021.

The field of fair machine learning has given us a proliferation of fairness measures. These measures are often touted as capturing normative egalitarian ideals, but it is often unclear which ideals they are meant to capture and whether they in fact successfully do so. Further, there is little consensus as to which, if any, of these normative ideals best reflects what fairness requires. We provide a framework for thinking about the connection between fairness measures, their egalitarian roots, and the standards that justify their use in different contexts. Using the framework, we explore the connections between three fairness measures and three egalitarian ideals.

Bias in Information, Algorithms, and Systems with Alan Rubel and Adam Pham In Jo Bates, Paul D. Clough, Robert Jäschke & Jahna Otterbacher (eds.), Proceedings of the International Workshop on Bias in Information, Algorithms, and Systems (BIAS). pp. 9-13. 2018.

We argue that an essential element of understanding the moral salience of algorithmic systems is an analysis of the relation between algorithms and agency. We outline six key ways in which issues of agency, autonomy, and respect for persons can conflict with algorithmic decision-making.

Agency Laundering and Algorithmic Decision Systems with Alan Rubel and Adam Pham In N. Taylor, C. Christian-Lamb, M. Martin & B. Nardi (eds.), Information in Contemporary Society (Lecture Notes in Computer Science) (Proceedings of the 2019 iConference), Springer Nature. pp. 590-598. 2019. 

This paper has two aims. The first is to explain a type of wrong that arises when agents obscure responsibility for their actions. Call it “agency laundering.” The second is to use the concept of agency laundering to understand the underlying moral issues in a number of recent cases involving algorithmic decision systems. From the Proceedings of the 14th International Conference, iConference 2019, Washington D.C., March 31-April 3, 2019. 

Reports

I served as a consultant for "Artificial Intelligence Ethics and Predictive Policing: A Roadmap for Research", authored by Ryan Jenkins and Duncan Purves.

This report maps key interdisciplinary and entangled issues to guide policymakers, police, and community members, and to scaffold research over the coming years. Predictive policing, the use of artificial intelligence to forecast future criminal behavior based on historical data, is in use in more than 60 police departments in the United States alone. The report draws upon empirical use cases and a diverse, international community of experts on technology, policing, and law. It opens with overarching questions concerning bias, the obligations of technology companies, and society’s fundamental conception of what crime is; these questions color its subsequent insights.

In Progress (email me for more information)

[A paper on understanding fairness measures in machine learning as requiring the equalization of (certain) probabilities] with Michele Loi

[A paper arguing against algorithmically generated prison sentences] with Alan Rubel and Lindsey Schwartz

[A paper on the foundations of algorithmic fairness, from a Broomean point of view]