Reasoning transparency refers to our ability to be as clear as possible about what led to our decisions and conclusions. It is closely related to concepts from open research, such as research transparency. We strive for reasoning transparency because making our sources of information, methodological decisions, levels of uncertainty, and general approach to the evidence explicit enables our audience to trust and critique our conclusions.
Reasoning transparency often goes undefined by actors who use the concept. We like this definition and rationale from Effective Thesis:
“Reasoning transparency as a research skill is your ability to make your research clear and explicit in its reasoning and conclusions, so that others can more easily understand what you did to reach your key takeaways and how to integrate those takeaways into their own thinking. Reasoning transparency, like archival research or data analysis, is a skill that academics and researchers can cultivate in order to contribute to their fields.” (Effective Thesis, n.d., para. 1)
Those with a background in academic research will notice the overlap with concepts like open research and research transparency. Here’s a definition of research transparency from the University of Manchester’s Office for Open Research (n.d.):
“Research transparency encompasses a range of open practices, including registering studies, sharing study data, and publicly reporting research findings. Researchers are encouraged to adopt transparent and responsible practices to improve research integrity and the trustworthiness of scientific findings.” (para. 2)
The two concepts are intertwined and could perhaps be used interchangeably. We prefer reasoning transparency because it is more encompassing. Open research is often (though not always) concerned with the production of primary research (see, for instance, this statement on transparency in research from University College London). Reasoning transparency can be thought of as covering primary research while also shaping how we as researchers describe our processes for secondary research and our reasons for making decisions and judgments.
The test for whether a researcher displays reasoning transparency (in their written work, a presentation, or simply a comment during a meeting) can’t be entirely objective and standardized. There are simply too many contextual factors at play (e.g., what information is important to communicate, how much time the individual has, what the audience already knows). However, a blog post from Open Philanthropy provides some of the best advice we could find (Muehlhauser, 2017). Key insights from the article are outlined below.
Always open with summaries. They help the audience understand your main points and prepare them for what’s coming. Extra brownie points if the summary links to where you expand on those points (Muehlhauser, 2017, section 3.1).
When you make a claim or have made a decision that affects how you researched things, tell the audience what weighed most heavily in your mind, pushing you to make those calls. An example from Muehlhauser (2017) here helps: “Some of my earlier Open Philanthropy Project reports don’t do this well. E.g., my carbs-obesity report doesn’t make it clear that the evidence from randomized controlled trials (RCTs) played the largest role in my overall conclusions.” (section 3.2)
Claims are always made with some degree of confidence in mind, whether or not you show this to the audience. Telling your audience how confident you are in your assessments helps them understand where the uncertainties in the research lie and how much confidence they can place in your conclusions. It is therefore good to state how confident you are in a claim (e.g., saying you are 80% sure that your dog has a happy life, or that it is very likely your dog has a happy life) and what types of evidence support it.
This last point on stating degrees of confidence deserves further clarification regarding words of estimative probability. Words of estimative probability, such as “highly likely,” convey how probable something is. The intelligence community noted decades ago that when these words are used, different people often have different actual probabilities in mind (what is “highly likely” to you? 60%? 80%?) (Kent, 1964). To avoid miscommunication and help calibrate interpretation, organizations or individuals sometimes publish a table specifying which probability range each word corresponds to; here’s what we use at AIM (although we are not great at always sticking to it).
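To make the idea of such a table concrete, here is a minimal sketch in Python of a word-to-probability mapping. The ranges below are purely illustrative (loosely inspired by intelligence-community conventions); they are not AIM’s actual table, and any organization would define its own.

```python
# Hypothetical mapping of estimative-probability words to probability ranges.
# These ranges are ILLUSTRATIVE ONLY -- not AIM's actual table.
ESTIMATIVE_WORDS = {
    "almost certain": (0.93, 1.00),
    "highly likely": (0.80, 0.93),
    "likely": (0.60, 0.80),
    "about as likely as not": (0.40, 0.60),
    "unlikely": (0.20, 0.40),
    "highly unlikely": (0.07, 0.20),
    "almost no chance": (0.00, 0.07),
}

def word_for_probability(p: float) -> str:
    """Return the estimative word whose range contains probability p."""
    for word, (low, high) in ESTIMATIVE_WORDS.items():
        if low <= p < high:
            return word
    raise ValueError(f"No estimative word covers p={p}")

print(word_for_probability(0.85))  # -> highly likely
```

The point of committing to a table like this is that “highly likely” then means the same thing to the writer and every reader, rather than anywhere from 60% to 95% depending on who is interpreting it.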
Likewise, we try to avoid uninformative words, sometimes called weasel words.
This chapter’s core material goes in-depth on reasoning transparency, providing concrete, actionable recommendations to incorporate into your practice.
Finally, here are some guiding questions we use when evaluating the reasoning transparency of a piece of work.
Does the author provide sufficient information to the reader about the sources of information leading to their inferences?
Does the author provide reasons for their decisions?
Where possible, does the author cite evidence and information that supports the inference?
Is the author clear about the relative importance of the inference for the aims of the deliverable?
Does the author provide sufficient information to the reader about the sources of information leading to their factual claims?
Are those accompanied by relevant sources?
When presenting results from studies, are the studies appropriately contextualized?
When presenting results from studies, are the results presented with the right accompanying statistical information (n, p values, confidence intervals, etc.)?
When handling sources of data, are these appropriately cited?
When handling sources of data, does the author present the relevant context of data gathering and potential limitations of the source?
Has the author written in plain language, easy to understand for someone without subject expertise? (When jargon had to be used, did the author explain it?)
Does the author clearly express their uncertainties, suggesting how relevant these uncertainties are for decision-making?
Ctrl+F for any instances of “think” or “maybe.” Is there a more transparent and concrete way of presenting the uncertainty?
Is the author clear about the relative importance of their uncertainty for the aims of the deliverable?
Does the author indicate degrees of confidence, quantifying these where possible?
Is the product structured in a way that allows for easy reading?
Are there unconnected bullet points or formatting choices that make the product difficult to read or give feedback on?
Does the author recognize and evaluate competing evidence for facts and inferences?
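One checklist item above asks whether study results come with the right accompanying statistical information (n, p values, confidence intervals). As a minimal sketch of what that information looks like, here is one common way to compute an approximate 95% confidence interval for a sample mean in Python, using only the standard library and a normal approximation (the sample data are made up for illustration; for small samples a t-distribution critical value would be more appropriate):

```python
import math
from statistics import mean, stdev

def mean_ci(sample, z=1.96):
    """Return (mean, lower, upper) for an approximate 95% CI of the mean.

    Uses the normal approximation with z = 1.96; assumes `sample` has
    at least two observations so the sample standard deviation exists.
    """
    n = len(sample)
    m = mean(sample)
    se = stdev(sample) / math.sqrt(n)  # standard error of the mean
    return m, m - z * se, m + z * se

# Made-up example data, for illustration only.
data = [4.1, 5.0, 4.7, 5.3, 4.9, 5.1, 4.6, 5.2]
m, lo, hi = mean_ci(data)
print(f"n = {len(data)}, mean = {m:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Reporting the interval alongside the point estimate (rather than the mean alone) is exactly the kind of accompanying information the checklist is asking for: it tells the reader how precise the estimate is, not just what it is.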
Reasoning Transparency (Muehlhauser, 2017)
Strong opinions, weakly held (Thunk, 2020)
Words of estimative probability (Kent, 1964) (if you want a cleaner look, this version may be easier to read)
Verbal probabilities: Very likely to be somewhat more confusing than numbers (Wintle et al., 2019)
This checklist summarizes the core advice in this module.
Practice project and samples in our full PDF version.