I work on how science ought to be conducted to produce evidence that best facilitates decision-making under uncertainty in policy or private contexts. To date, most of my research discusses how scientists could use effect sizes of tested interventions to inform rational decision-making. I also have ongoing interests in how scientists can produce evidence for long-run decision-making. More recently, I have started to work on how uncertainty attitudes influence scientific knowledge, and on which causal properties scientists should study to facilitate rational decision-making.
Australasian Journal of Philosophy (forthcoming)
We often want to choose the intervention that most effectively achieves our goals. But which interventions are most effective? Here we often turn to scientists, who report the effectiveness of tested interventions using the mean difference: the difference between the expected outcome given a tested intervention and the expected outcome given a control intervention. I identify a problem with this practice: even perfectly accurate mean differences omit information about how interventions change the probability distribution over an outcome of interest, and this omitted information can be vital for identifying the best intervention, namely the one a decision-maker would rationally choose if they were to learn all the information available in studies about how the interventions change that probability distribution. Because mean differences omit some of this information, they may fail to inform the agent sufficiently to identify the best intervention. That is the bad news. The good news is that I identify sufficient conditions for mean differences to inform rational agents sufficiently (specifically, agents abiding by expected utility theory or risk-weighted expected utility theory). Based on my results, I advise how researchers and decision-makers can use mean differences to facilitate rational decision-making.
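The abstract's core point, that identical mean differences can mask decision-relevant differences in the outcome distribution, can be illustrated with a toy calculation (all numbers are my own hypothetical example, not from the paper):

```python
# Minimal sketch: two hypothetical interventions share the same mean
# difference over a control, yet a risk-averse expected utility
# maximizer ranks them differently.
import math

control = [50, 50, 50, 50]    # control outcomes: 50 for sure
safe    = [60, 60, 60, 60]    # intervention A: 60 for sure
risky   = [100, 20, 100, 20]  # intervention B: 50/50 between 100 and 20

def mean(xs):
    return sum(xs) / len(xs)

md_safe  = mean(safe)  - mean(control)   # +10
md_risky = mean(risky) - mean(control)   # +10: identical mean differences

def expected_utility(xs, u=math.sqrt):
    # A concave utility function (here sqrt) models risk aversion.
    return sum(u(x) for x in xs) / len(xs)

# The reported mean differences agree, but the risk-averse agent
# strictly prefers the safe intervention.
print(md_safe == md_risky)                               # True
print(expected_utility(safe) > expected_utility(risky))  # True
```

The mean difference collapses each distribution to its expectation, so the two interventions look identical to a reader of the report even though they differ in exactly the respect a risk-averse agent cares about.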
Philosophy of Science (2023)
Absolute and relative outcome measures quantify a treatment’s effect size, purporting to inform treatment choices. I argue that absolute measures are at least as good as, if not better than, relative ones for informing rational decisions across choice scenarios. Specifically, this dominance of absolute measures holds both for choices between a treatment and a control treatment tested in the same trial and for choices between treatments tested in different trials. This distinction has hitherto been neglected, as has the role of absolute and baseline risks in decision-making that my analysis reveals. Recognizing both aspects advances the debate on reporting outcome measures.
Erasmus Student Journal of Philosophy (2021)
Thomas Rowe and Alex Voorhoeve (2018) extend a pluralist egalitarian theory of distributive justice to decisions under so-called severe uncertainty. The proposed extension allows for trading off the pluralist egalitarian concern to avoid inequality (inequality aversion) against an attitude many people seem to exhibit, namely the concern to avoid severe uncertainty (uncertainty aversion). In this paper, I argue that the proposed pluralist egalitarianism should be adjusted for situations in which uncertainty aversion conflicts with inequality aversion. In such cases, a concern to avoid uncertainty should play no role in deciding what to do: even uncertainty averse people ought to choose as if they were not uncertainty averse. They may then decide depending on how inequality averse they are.
co-authored with Nicholas Makins
under review (e-mail me for a draft)
Synopsis: We prove that both absolute and relative effect sizes can fail to sufficiently inform almost any expected utility maximizer (i.e., any agent whose utility function is not additively separable between outcomes measured in trials and other aspects of the world they care about). We argue that this result motivates a context-sensitive approach to using absolute and/or relative effect sizes in evidence-based decision-making.
complete manuscript (e-mail me for a draft)
Synopsis: I prove that standardising mean differences risks stripping away information that rational decision-makers care about, even when facing measurement uncertainty. I also prove that this risk can be avoided in two kinds of scenarios. To establish these results, I develop a model for rational updating under measurement uncertainty. These findings advance ongoing debates in statistics on when researchers should standardise mean differences.
in progress (e-mail me for slides)
Synopsis: I argue that using effect sizes to inform decision-makers often risks providing already disadvantaged people with fewer opportunities to gain decision-relevant knowledge, thereby treating them unfairly.
in progress (e-mail me for slides)
Synopsis: I argue that research on the long-run effects of interventions in the biomedical and social sciences often leaves evidential gaps on how causal effects persist. Yet, understanding how causal effects persist is vital for good long-run decision-making. To argue for this view, I develop an account of which properties of causes facilitate persistent effects.
in progress
Synopsis: I draw on the concept of almost stochastic dominance from economics to prove a set of formal results that identify scenarios in which (different) effect sizes sufficiently inform every rational decision-maker in a group except for very unusual agents. I argue that these results open up new avenues for justifying the use of effect sizes in evidence-based decision-making, even in scenarios in which effect sizes can omit information for some decision-makers: a second-best approach to rational effect size measurement.