Philosophy of Science (2023)
Abstract: Absolute and relative outcome measures quantify a treatment's effect size and purport to inform treatment choices. I argue that absolute measures are at least as good as, if not better than, relative ones for informing rational decisions across choice scenarios. Specifically, this dominance of absolute measures holds both for choices between a treatment and the control treatment from the same trial and for choices between treatments tested in different trials. This distinction has hitherto been neglected, as has the role of absolute and baseline risks in decision-making that my analysis reveals. Recognizing both aspects advances the discussion on which outcome measures to report.
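To illustrate the two kinds of measure with hypothetical numbers (my illustration, not taken from the paper): for a binary outcome with baseline (control) risk \(p_c\) and treatment risk \(p_t\), the absolute measure is the risk difference and the relative measure is the relative risk,

\[ \text{ARR} = p_c - p_t, \qquad \text{RR} = \frac{p_t}{p_c}. \]

For instance, \(p_c = 0.40,\ p_t = 0.20\) and \(p_c = 0.02,\ p_t = 0.01\) both give \(\text{RR} = 0.5\), yet the risk differences are \(0.20\) and \(0.01\) respectively; the same relative effect can thus coexist with very different absolute effects depending on the baseline risk, which is one way baseline risks matter for decision-making.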
Erasmus Student Journal of Philosophy (2021)
Abstract: Thomas Rowe and Alex Voorhoeve (2018) extend a pluralist egalitarian theory of distributive justice to decisions under so-called severe uncertainty. The proposed extension allows for trading off the pluralist egalitarian concern to avoid inequality (inequality aversion) against an attitude many people seem to exhibit, namely the concern to avoid severe uncertainty (uncertainty aversion). In this paper, I argue that the proposed pluralist egalitarianism should be adjusted for situations in which uncertainty aversion conflicts with inequality aversion. When deciding what to do in such cases, a concern to avoid uncertainty should not be taken into account. Instead, even uncertainty-averse people ought to choose as if they were not uncertainty averse. They may then decide depending on how inequality-averse they are.
We often want to choose the intervention that most effectively achieves our goals. But which interventions are most effective? For answers, we often turn to scientists who report the effectiveness of tested interventions using those interventions' mean differences: the difference in expected outcomes given a tested intervention and given a control intervention. I identify a problem with this practice: even perfectly accurate mean differences omit information about how interventions change the probability of an outcome of interest, and this omitted information can be vital for choosing the best intervention. The best intervention for an agent is the one they would rationally choose if they could learn all the information available in studies about how the interventions at stake change the probability distribution over an outcome they care about. Mean differences omit some of this information and, in doing so, may fail to inform the agent sufficiently to identify the best intervention. That is the bad news. The good news is that I show when mean differences do inform rational agents sufficiently. Based on these results, I advise how researchers and decision-makers can use mean differences whilst sufficiently informing rational decision-making.
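As a hypothetical illustration of the omission (numbers mine, not from the paper): write the mean difference of intervention \(A\) over control \(C\) as

\[ \Delta_A = \mathbb{E}[Y \mid A] - \mathbb{E}[Y \mid C]. \]

Suppose intervention \(A\) improves every patient's outcome \(Y\) by 2 units, while intervention \(B\) leaves half of the patients unchanged and improves the other half by 4 units. Then \(\Delta_A = \Delta_B = 2\), yet the probability of improving at all is 1 under \(A\) and 0.5 under \(B\). The identical mean differences are silent on this difference in the probability distribution over the outcome, which can be exactly what an agent needs to know to pick the best intervention.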
Biomedical researchers often quantify how effective tested interventions are using relative or absolute effect sizes, and these effect sizes inform decisions between interventions, for instance, in clinical settings. Contra Sprenger & Stegenga (2017) and Jäntgen (2023), we show that absolute and relative effect sizes may fail to inform any risk-sensitive rational agent sufficiently about the interventions' effectiveness to decide which intervention is best, and that relative effect sizes may fail to inform even rational agents who are not at all risk-sensitive. These results hold for the standard view of rational decision-making, expected utility theory. In short, absolute and relative effect sizes may fail to sufficiently inform expected utility maximisers. Our results set the ground for a much-needed debate on which effect sizes sufficiently inform which expected utility maximisers. We need this debate to make progress on how, if at all, to use absolute and/or relative effect sizes in evidence-based medicine.
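For reference, a schematic statement of the expected-utility benchmark appealed to here (notation mine, not the paper's): an agent with utility function \(u\) over outcomes should prefer intervention \(A\) to \(B\) exactly when

\[ \mathbb{E}[u \mid A] = \sum_{y} P(y \mid A)\,u(y) \;>\; \sum_{y} P(y \mid B)\,u(y) = \mathbb{E}[u \mid B]. \]

Whether a reported effect size sufficiently informs such an agent then depends on whether it pins down enough of \(P(\cdot \mid A)\) and \(P(\cdot \mid B)\) to rank these expectations given the agent's \(u\); on the standard reading, a risk-sensitive agent has a non-linear \(u\), so summaries of central tendency alone may leave the ranking underdetermined.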
In applied research areas, scientists often report the effect sizes of tested interventions to decision-makers. For continuous outcome variables, scientists typically report an intervention's effect size as a mean difference or a standardised mean difference. Researchers usually standardise mean differences because they are uncertain about how the measures used in different studies measure an outcome of interest. I argue that this practice is risky: even when facing measurement uncertainty, reporting standardised mean differences risks omitting information that matters to rational decision-makers, information they could learn from the mean differences and standard deviations researchers could report instead. However, I also show that researchers can avoid this risk while still reporting standardised mean differences: in either of two scenarios, standardising mean differences does not strip away information that matters to at least some rational agents, and these scenarios also allow researchers to interpret and compare standardised mean differences. To establish these results, I develop a model of how agents can rationally learn from mean differences when facing measurement uncertainty.
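For concreteness, one standard way of standardising (the particular standardiser varies across studies; this pooled version is illustrative, not necessarily the paper's): the standardised mean difference divides the raw mean difference by a standard deviation,

\[ d = \frac{\bar{x}_T - \bar{x}_C}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} = \sqrt{\frac{(n_T - 1)\,s_T^2 + (n_C - 1)\,s_C^2}{n_T + n_C - 2}}. \]

Dividing by \(s_{\text{pooled}}\) strips the outcome of its original unit, which is what makes results comparable across differently scaled measures but also what risks discarding information about the effect's size on the original scale, information decision-makers could otherwise obtain from the raw mean differences and standard deviations.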