Australasian Journal of Philosophy (forthcoming).

Abstract: For aggregative theories of moral value, it is a challenge to rank worlds that each contain finitely many valuable events. And, although there are several existing proposals for doing so, few provide a cardinal measure of each world's value. This raises the even greater challenge of ranking lotteries over such worlds—without a cardinal value for each world, we cannot apply expected value theory. How then can we compare such lotteries? To date, we have just one method for doing so (proposed separately by Arntzenius, Bostrom, and Meacham), which is to compare the prospects for value at each individual location, and to then represent and compare lotteries by their expected values at each of those locations. But, as I show here, this approach violates several key principles of decision theory and generates some implausible verdicts. I propose an alternative—one which delivers plausible rankings of lotteries, which is implied by a plausible collection of axioms, and which can be applied alongside almost any ranking of infinite worlds.

Ethics (forthcoming). Featured in the Global Priorities Institute Working Paper Series (2020).

Abstract: Consider a decision between: (1) a certainty of a moderately good outcome, such as one additional life saved; and (2) a lottery which probably gives a worse outcome, but has a tiny probability of some vastly better outcome (perhaps trillions of blissful lives created). Which is morally better? By expected value theory (with a plausible axiology), no matter how tiny that probability of the better outcome, (2) will be better than (1) as long as that better outcome is good enough. But this seems fanatical. So we may be tempted to abandon expected value theory.

But not so fast: denying all such verdicts brings serious problems. For one, we must reject either that moral betterness is transitive, or even a weak tradeoffs principle. For two, we must accept that our judgements are either ultra-sensitive to small probability differences or inconsistent over structurally identical pairs of lotteries. And, for three, we must sometimes accept judgements which we know we would reject if we learned more. Better to accept fanaticism than these implications.

In Virtues and Economics 4 (forthcoming).

Philosophical Studies 178.6 (2021): 1917-1949.

Abstract: How might we extend aggregative moral theories to compare infinite worlds? In particular, how might we extend them to compare worlds with infinite spatial volume, infinite temporal duration, and infinitely many morally valuable phenomena? When doing so, we face various impossibility results from the existing literature. For instance, the view we adopt can endorse the claim (1) that worlds are made better if we increase the value in every region of space and time, or the claim (2) that they are made better if we increase the value obtained by every person. But it cannot endorse both claims, so we must choose. In this paper I show that, if we choose the latter, our view will face serious problems such as generating incomparability in many realistic cases. Opting instead to endorse the first claim, I articulate and defend a spatiotemporal, expansionist view of infinite aggregation. Spatiotemporal views such as this do face some difficulties, but I show that these can be overcome. With modification, they can provide plausible comparisons in the cases that we care about most.

PhD dissertation (2021)

Abstract: Suppose you found that the universe around you was infinite—that it extended infinitely far in space or in time and, as a result, contained infinitely many persons. How should this change your moral decision-making? Radically, it seems, according to some philosophers. According to various recent arguments, any moral theory that is 'minimally aggregative' will deliver absurd judgements in practice if the universe is (even remotely likely to be) infinite. This seems like sound justification for abandoning any such theory.

My goal in this thesis is simple: to demonstrate that we need not abandon minimally aggregative theories, even if we happen to live in an infinite universe. I develop and motivate an extension of such theories, which delivers plausible judgements in a range of realistic cases. I show that this extended theory can overcome key objections—both old and new—and that it succeeds where other proposals do not. With this proposal in hand, we can indeed retain minimally aggregative theories and continue to make moral decisions based on what will promote the good.


Abstract: Our actions in the marketplace often harm others. For instance, buying and consuming petroleum contributes to climate change and thereby does harm. But there is another kind of harm we do in almost every market interaction: market harms. These are harms inflicted via changes to the goods and/or prices available to the victim in that market. (Similarly, market benefits are those conferred in the same way.) Such harms and benefits may seem morally unimportant, as Judith Jarvis Thomson and Ronald Dworkin have argued. But, when those harms or benefits are concentrated on the global poor, they can have considerable impacts on wellbeing. For instance, in 2007-2008, commodity traders invested heavily in wheat and other staple foods, caused a dramatic price rise, and thereby pushed 40 million people into hunger. In such cases, intuition suggests that the traders act wrongly. In this paper, I argue that market harms and benefits are morally equivalent to harms and benefits imposed through other means (contra Thomson and Dworkin). I also demonstrate that, in practice, these harms and benefits are often great in magnitude. For many common products, buying that product results in a considerable financial loss for one group and a considerable gain for another. For instance, for every $10 we spend on wheat, we cause the global poor to lose between $5 and $67 (in expectation) and the global rich to gain the same amount. In light of these effects, I argue that we have moral duties to adopt certain consumption habits.

The offsetting paradox (with Tyler M. John & Amanda Askell)

Under review

Abstract: Many real-world agents try to offset the harm their behaviours do to the world. People offset carbon emissions, river and air pollution, and even eating animal products. Offsetters typically believe that, by offsetting, they change the deontic status of their prior behaviour, making otherwise impermissible action permissible. Some philosophers have argued that they indeed do, since certain offsets reverse the adverse effects of otherwise harmful behaviour. We show that this is not so: in practice, carbon offsetting and similar practices do not reverse the harms of the original behaviour, nor do they benefit the same group as was harmed. Offsetting thus raises a puzzle for moral theory. Traditional nonconsequentialist moral theories forbid actions that harm one innocent group even if they proportionately benefit another. Some theories allow this when ex ante Pareto is satisfied but, if they are to allow offsetting, these theories must deliver implausible verdicts in some cases. Moreover, any moral theory that allows offsetting faces a dilemma between allowing any wrong to be offset, no matter how grievous, and recognising an implausibly sharp discontinuity between offsettable actions and non-offsettable actions. The most plausible option available is to accept that offsetting almost never succeeds in rendering an otherwise impermissible action permissible.

Under review

Abstract: Our universe is both chaotic and (most likely) infinite in space and time. But it is within this setting that we must make moral decisions. This presents problems. The first: due to our universe's chaotic nature, our actions often have long-lasting, unpredictable effects, which means we typically cannot say which of two actions will turn out best in the long run. The second: due to the universe's infinite dimensions, and the infinite population therein, we cannot compare outcomes by simply adding up their total moral values, since those totals will typically be infinite or undefined. Each of these problems poses a threat to aggregative moral theories. But, for each, we have solutions: a proposal from Greaves lets us overcome the problem of chaos, and proposals from the infinite aggregation literature let us overcome the problem of infinite value. But a further problem emerges. If our universe is both chaotic and infinite, those solutions no longer work: outcomes that are infinite and differ by chaotic effects are incomparable, even by those proposals. In this paper, I show that we can overcome this further problem. But, to do so, we must accept some peculiar implications about how aggregation works.

Under review

Abstract: Aggregative moral theories face a series of devastating problems when we apply them in a physically realistic setting. According to current physics, our universe is likely infinitely large, and will contain infinitely many morally valuable events. But standard aggregative theories are ill-equipped to compare outcomes containing infinite total value, so, applied in a realistic setting, they cannot compare any outcomes a real-world agent must ever choose between. This problem has been discussed extensively, and non-standard aggregative theories have been proposed to overcome it. This paper addresses a further problem of similar severity. Physics tells us that, in our universe, how remotely in time an event occurs is relative. But our most promising aggregative theories, designed to compare outcomes containing infinitely many valuable events, are sensitive to how remote in time those events are. As I show, the evaluations of those theories are then relative too. But this is absurd; evaluations of outcomes must be absolute. So we must reject such theories. Is this objection fatal for all aggregative theories, at least in a relativistic universe like ours? I demonstrate here that, by further modifying these theories to fit with the physics, we can overcome it.

Risk aversion and the not-so-long run

In preparation

How to neglect the long term

In preparation

Is the expected value of the future well-defined?

In preparation

Will the real ex ante Pareto please step forward?

In preparation

Ought a maximising consequentialist give 90%?

In preparation

Discounting the present

In preparation