Primary research interests:
ethics of technology; data and information ethics; AI ethics
climate ethics
philosophy of action/moral psychology
social and political philosophy
Publications
Weakness of Political Will, in Journal of Ethics and Social Philosophy
Abstract: In this paper I defend an analogy between the motivational failings of individuals and those of collective political entities. I use theories from the weakness of will literature to develop a model of the same phenomenon as it occurs in political entities and to articulate an account of political weakness of will, or political akrasia. Weakness of political will is a distinctly political concept that applies to group agents such as governments and other political collectives. I argue that understanding weakness of political will as a problem analogous to weakness of will in individuals enables us to better explain why political entities often fail to do what would seem to be best.
(Some) Algorithmic Bias as Institutional Bias, in Ethics and Information Technology
Abstract: In this paper I argue that some examples of what we label ‘algorithmic bias’ would be better understood as cases of institutional bias. Even when individual algorithmic systems appear unobjectionable, they may produce biased outcomes given the way they are embedded in the background structure of our social world. The problematic outcomes associated with the use of AI systems therefore cannot be understood or accounted for without some kind of structural account. Understanding algorithmic bias as institutional bias in particular (as opposed to understanding it through other structural accounts) has at least two important upshots. First, I argue that the existence of bias intrinsic to certain institutions (whether algorithmic or not) suggests that, at least in some cases, the algorithms now standing in for pieces of institutional norms or rules are not “fixable” in the relevant sense, because the institutions they help make up are not fixable. Second, I argue that in other cases, changing the algorithms used within our institutions (rather than getting rid of them entirely) is essential to changing the background structural conditions of our society.
Works in Progress
AI, LLMs, and the Normativity of Belief (under review)
Abstract: Whether or not large language models (LLMs) can be said to have representational attitudes like beliefs (or motivational attitudes like intentions) remains an open question. In this paper I argue that on some commonly accepted views about belief, LLMs, given their structure, are not capable of having beliefs. To do so, I draw on the normativity of belief literature to distinguish three types of views about the kinds of things beliefs are. The first category includes views that deny that there are any norms of belief at all, in any sense. The second category includes views that allow that there might exist, in some way or another, a norm of belief, but hold that this is true only in a purely conceptual sense; I will call these the mere constitutivist accounts. The third category includes any views that suggest a stronger normative picture for the norm of belief; I'll refer to these as normativism about belief. My hope is to show that if either the mere constitutivist or the normativist accounts of belief are correct, then LLMs are not capable of having attitudes like beliefs or intentions.
Bad Art, Fake News, Digital Clutter: Skepticism about Generated Content (draft in progress)
Abstract: In this paper, my aim is to outline a worry about generative AI. The worry is roughly this: the mass proliferation of AI-generated content (e.g., images, articles), against a backdrop of general epistemic excess, forces us to adopt a kind of skeptical stance toward the content we come across. I'll begin by stating the worry and explaining the conditions that would need to hold for the worry to be justified: (1) that we have trouble distinguishing AI-generated content from human-generated content, and (2) that we have good reason to be skeptical of AI-generated content. I'll also suggest that (1) is fairly easy to demonstrate empirically. To defend (2), I'll borrow from C. Thi Nguyen’s notion of trust in aesthetic sincerity to argue that we have reason to trust and engage with content that results from action guided by aesthetic considerations. Further, I will argue that the idea of being guided by considerations requires a particular kind of norm-governed, intentional agency that involves being able to coordinate plan states over time.