Experts are sources of invaluable insight, which we rely on to focus our research, clarify doubts, and learn about a specific area of work. Different types of experts will be suited to different research needs, for instance depending on whether you are investigating a broad cause area or a narrowly defined intervention. Experts often have years of experience or lived experience (or both) in a field, which you can tap by thinking carefully about what to ask and considering how their answers can be used in your analysis.
Great researchers know how and when to get expert input. Garnering expert views consists of speaking to people with expertise on a subject. Sometimes expert views are collected systematically, such as in expert surveys (Maestas, 2016). In other cases, a researcher may consult experts more informally. Experts can often give a broad overview of a topic, allowing researchers to gain a comprehensive understanding of an idea. At AIM, we often work on a topic for a few weeks or months before moving on to the next. Experts thus provide very valuable insight into the nuances of a subject and help us test our knowledge and understanding of a particular field.
Experts aren’t infallible, however. There is a skill in choosing the right people to talk to, and knowing how and when to trust their advice. One shouldn’t rely on expert opinion alone when making decisions. Expert input is thus thought of as a form of secondary evidence. In an ideal world, an expert’s views on a particular topic will be influenced by primary evidence (e.g., studies they’ve read, experiences they’ve had, arguments they’ve heard), as well as other secondary evidence (e.g., conversations with other experts). The best experts on a topic will be particularly good at interpreting the primary evidence and forming accurate beliefs about it. So we should expect their views to be correlated with the truth. But the primary evidence itself is often more important.
Here are some situations where we will need to rely on the claims of authorities or experts:
We can’t access or understand the primary evidence. For instance, the subject is too technical or subtle for us to follow under capacity constraints.
Adequately reviewing the primary evidence would be too resource-intensive. Expert commentary can support decision-making and point us to the most relevant sources.
There are pieces of information you cannot gain from desktop research. For instance, details related to context, implementation matters, the situation on the ground, or information others are more hesitant to share publicly (e.g. programs or organizations that failed).
In practice, even if you’ve thoroughly reviewed the primary evidence, it is often still worth consulting experts: you may have made a mistake in your review, and they can help you identify it.
We can divide experts into three groups: specialist, domain, and broad experts.
Specialist experts: Specialist experts have deep knowledge in a very specific context, like a single step or variable in an intervention or one approach to an intervention. These could be factual specialists (who have ‘knowledge-that’) or practitioner specialists (who have ‘knowledge-how’). These types of experts have insight into an element of the intervention but not the whole. They can provide a piece of the picture but not a broad comparison. For example, a specialist in postpartum depression would probably not be able to compare this issue to body dysmorphic disorder and may not even be willing to offer a guess on the relative burden. However, they would be able to provide highly specific information about a specific treatment style for postpartum depression that you have identified as promising.
Domain experts: Domain experts have broader knowledge than specialists, often having deep knowledge about an entire intervention or type of intervention. They might know about cage-free campaigns for chickens or corporate campaigns for animals in general. But they likely wouldn’t be able to compare chicken welfare interventions to fish welfare ones. They also might not be able to give you an overview of the biggest challenges and opportunities in the farmed animal welfare cause area more broadly. Domain experts can be highly useful to speak to after you have selected a cause area but before investing major funding into any one intervention.
Broad experts: Broad experts can provide comparisons across different domains. For example, another researcher who has evaluated half a dozen different fish organizations might have a strong sense of how the issue of disease compares to problems caused by transportation, even if she does not have the same detailed sense of specific diseases as the specialist expert. An example of a broad expert might be a researcher at an evaluation organization, such as GiveWell or the Copenhagen Consensus, or an author on the Disease Control Priorities reports. However, they could also be a large impact-driven and evidence-based funder or cross-area implementer with a good sense of the space overall.
Strengths
Breadth: Consulting experts is broad in terms of the range of scenarios it’s applicable to and the breadth of information it captures. Experts have formed their views from a wide range of sources, ranging from studies to discussions with other experts and personal experience in the field. This reduces the chance that an important factor or perspective is missed. Also, experts sometimes have access to information that’s not publicly available at the time, like (a) insights from studies that won’t be released for years, (b) controversial opinions they aren’t willing to share publicly, and (c) information on what research is in the pipeline.
Can directly compare possible strategies: We can present an expert with highly specific plans and have them compare different elements much more quickly than a more formal model, like a cost-effectiveness analysis, could. If we consider three interventions in three different countries, with three different partner organizations, the number of permutations quickly becomes overwhelming for a formal model. Experts can compare multiple iterations and suggest which combination is best.
Pressure testing our specific plans: Scientific studies will provide evidence on a similar scenario to our own and require some generalization to a different location, population, or approach; however, experts can provide guidance on our specific plans while considering the exact details. They can provide guidance on our plans grounded in first-hand experience more readily than rational consideration can.
Robust against individual errors: By aggregating a number of independent perspectives, conclusions informed by expert advice are a lot less vulnerable to individual errors than conclusions based entirely on rationality (which often involve a single linear argument) or a single cost-effectiveness model. Of course, for this to work, the perspectives do need to be truly independent (i.e. not all proponents of the same narrow school of thought or students of the same dogmatic teacher).
Captures field-level convergence: Speaking to experts can give a sense of whether individuals within a field have a fairly unified view (i.e. if all five experts you speak to agree on a topic) or if there is a lot of disagreement (i.e. five experts give five different answers). Provided the experts reached their conclusions independently, then a high level of convergence can justify higher confidence in their expert view. Divergence helps you identify the crucial considerations that warrant further investigation.
Helps to direct and focus our search for evidence: Experts can often provide insight on which experts we should speak to next and which resources we should read. They can also often connect us directly to these other experts and resources. This allows us to only spend time engaging with the most useful evidence. For this reason, speaking to an expert fairly early in our decision-making process can make sense, particularly those with whom we already have relationships.
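The claim above that aggregating independent perspectives is robust against individual errors can be illustrated with a toy simulation (a minimal sketch with made-up numbers, not a model of real expert judgment): each hypothetical expert estimates a quantity with independent noise, and the average of five estimates typically lands closer to the truth than any single estimate does.

```python
import random
import statistics

random.seed(0)

TRUE_VALUE = 100.0  # hypothetical quantity the experts are estimating
N_EXPERTS = 5
N_TRIALS = 1000

avg_errors, individual_errors = [], []
for _ in range(N_TRIALS):
    # Each expert's estimate is the truth plus independent noise.
    estimates = [TRUE_VALUE + random.gauss(0, 20) for _ in range(N_EXPERTS)]
    avg_errors.append(abs(statistics.mean(estimates) - TRUE_VALUE))
    individual_errors.append(abs(estimates[0] - TRUE_VALUE))

print(f"mean error of a single expert:   {statistics.mean(individual_errors):.1f}")
print(f"mean error of the 5-expert mean: {statistics.mean(avg_errors):.1f}")
```

Note that this averaging benefit disappears if the experts' errors are correlated, which is exactly why the independence caveat above matters.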
Weaknesses
Expert judgment has been found to have a significant number of weaknesses. Studies have shown that in some areas, such as predicting the future, “many of the experts did worse than random chance, and all of them did worse than simple algorithms.” These concerns limit experts’ usefulness and make us confident that they should not be the only perspective used; deference to experts should be limited.
Not all of the following concerns apply to every expert, but they are general patterns that will apply to many:
Susceptibility to cognitive bias: A number of cognitive biases affect all humans. Experts are, in the end, just better-informed humans, so they suffer from the same biases as the rest of us. They are distinguished by their particular experience or education on a given topic, not by their ability to assess evidence impartially and avoid motivated reasoning. One mitigating factor is that if multiple experts are spoken to, their biases will not necessarily overlap, and their average quality of judgment tends to be higher than that of a single expert. Still, cognitive biases weaken expert opinions as a form of evidence. In particular:
Confirmation bias, resulting in unequal application of rigor: Experts are often far more rigorous in attempting to falsify claims that conflict with their current viewpoint than with claims that support it. In particular, experts are often anchored to the approaches they’ve used and the projects they’ve been involved with in the past and will be unduly skeptical of competing approaches.
Groupthink: This bias, caused by the desire for harmony or conformity within a group, results in group members reaching a consensus conclusion without critically evaluating alternative options. Under the influence of groupthink, group members ignore or suppress dissenting viewpoints. For example, in the case of charity ideas, if an idea has not been previously tested or considered by other experts in their field, experts will often be more inclined to dismiss the idea than they would if the same concept was presented by someone connected to their in-group. Although this is a useful heuristic for experts to use, it can make them underweight new ideas relative to more established ones.
Lack of transparency of reasoning: Experts have formed their views using a considerable number of sources and experiences. A byproduct of this is that it’s often difficult to pinpoint the exact basis for a given viewpoint. This can make it very challenging to confirm or disprove a given idea or to know how much weight it should be given.
Inconsistent and unclear epistemology: All experts have had to apply weights to different types of evidence when forming their beliefs, but few have thought explicitly about what their weights should be. Even fewer have made their epistemology public. So while experts are highly likely to have engaged deeply with the relevant evidence, they are less likely to have integrated it carefully or consistently to form a viewpoint.
Limited specificity and decisiveness: Our research team has conducted hundreds of expert interviews and found that many experts are unwilling to give specific estimates, such as a percentage-based chance of success. They are also often unwilling to make claims that could be used in other methodologies, such as CEAs, particularly if those claims cannot be anonymized. In fact, experts will often be unwilling to make general evaluations at all – for example, they might be willing to list the advantages and disadvantages of a given intervention but not actually recommend whether or not to implement it.
We defer to people all the time on different issues, whether it’s the doctor at a hospital, the weather presenter for the forecast, or a chef on how to cook a new recipe. Even in our own domains of expertise, we often trust others to tell us what the data says because it would be unnecessarily time-consuming to always pore over the data ourselves.
Knowing whom to trust is a difficult and important skill. Trust the wrong person, and they can fill our heads with the wrong information. But trust no one, and we have to fix every broken bone ourselves. So how can we determine who is credible and who is not?
There are six main ways to test whether a source or person is worth putting our trust in. Each of these is more of a spot check than perfectly predictive, and not all can be done in every case. In descending order of how good an indicator it is, we can:
Test against reality: The best way to test if we can trust a person or source is to test their statements against reality. Say there are two weather forecasters; we are unsure whom to trust. In this case, a reality check is easy. We could compare each of their historical predictions with the historical weather to see who has been accurate more often. This does not guarantee who will be a better source in the future, but it’s strongly suggestive. Similarly, suppose a source predicts a certain reality, particularly in an easily falsifiable manner. In that case, this evidence can be used to support or create skepticism for its credibility.
Reality checks can also be used for groups of sources. For example, lots of people go to the hospital with broken arms and generally come out with a cast and an improved state. Thus, we might generally trust hospitals to fix broken arms, even if we have not checked the specific doctor.
Reality is the ultimate arbiter. It does not matter if one weather forecaster speaks more confidently on TV, wears a better suit, or has a PhD – the one whose predictions more closely match reality is the better source. And if someone systematically makes predictions that cannot be tested against reality, we should be cautious about taking their claims or predictions at face value, as they are operating without feedback loops.
In the context of a researcher, ‘testing against reality’ looks like assessing their track record of public predictions and claims and monitoring whether the advice they give leads us in the direction that makes the most sense once we have all the facts. For example, we have checked over a dozen sections of GiveWell’s work, often putting several dozen hours of research into a specific claim. Again and again, from our best assessment, it looks like they are correct. Over time, this builds trust, so we can now use GiveWell as a reliable source to check other claims against, thus building a network of trusted sources.
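For forecasters specifically, “testing against reality” can be made concrete with a proper scoring rule such as the Brier score, which penalizes probability forecasts by their squared distance from what actually happened. The sketch below uses invented track records for two hypothetical forecasters; all names and numbers are illustrative only.

```python
# Hypothetical track records: each entry is (forecast probability of rain, did it rain?).
forecaster_a = [(0.9, True), (0.2, False), (0.7, True), (0.6, False), (0.8, True)]
forecaster_b = [(0.5, True), (0.5, False), (0.5, True), (0.5, False), (0.5, True)]

def brier_score(records):
    """Mean squared error between forecast probabilities and outcomes (lower is better)."""
    return sum((p - float(outcome)) ** 2 for p, outcome in records) / len(records)

print(f"Forecaster A: {brier_score(forecaster_a):.3f}")  # sharper and mostly right
print(f"Forecaster B: {brier_score(forecaster_b):.3f}")  # hedges at 50% every time
```

On this toy record, forecaster A scores better than B, who always says 50% and so conveys no information; a confident suit or a PhD never enters the calculation.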
Test against other sources of evidence: Not all claims can be easily tested against reality, but a large number can be tested against other sources of evidence. For example, suppose we were deciding whether to trust the advice of an expert with no public track record of claims or predictions and who is providing us advice for the first time. Suppose they predicted something we initially find counter-intuitive, like that the population of China will halve in the next 50 years. If, after looking into it, the empirical evidence suggests rapidly declining population growth and the best rational arguments suggest that we should expect such a decline, we might be more inclined to trust the expert.
Test expertise on adjacent topics: Trustworthiness in one domain does not always mean trustworthiness in another. A hospital may be good at fixing a broken arm, but we would be wary of its ability to predict the weather. However, we would probably place quite a bit of trust in its ability to treat a pet dog with a bacterial infection, because this is sufficiently similar to its area of expertise. If GiveWell (a trustworthy global health charity evaluator) started recommending charities in an area that is more difficult to measure, like policy or animal welfare, we would be more inclined to trust them – even if we could not yet test their recommendations against reality or other sources of evidence.
Assess their incentives/motivations: Try to imagine reasons an expert might bend or distort the truth; establish if they have any motive or incentives to advise you in a certain direction that may not be the most impactful. For example, an expert could have the incentive to give advice that points you towards giving more grants to one intervention or fewer grants to a competing intervention if it makes it easier for them to secure ongoing support for their research into that intervention. The most trustworthy experts will be those who have nothing to lose or gain from your decisions.
Often the incentive at play will be subtle, and the experts may not even be conscious of it themselves. For example, a common form of bias that experts fall victim to is to over-prescribe the method they specialize in, far beyond the contexts it’s best suited to – in line with the saying, “If all you have is a hammer, everything looks like a nail.” The incentive here may simply be the desire to justify the decision to specialize in a given method by concluding that it’s the most useful option. Many readers will have experienced this in the context of medicine, where surgeons (to name just one example) often seem biased towards a surgical solution to a problem that could be better solved another way.
Check for references: It is always worthwhile to ask around for reviews of an expert whose work you plan on using. If other experts who have earned your trust endorse their work, they are more likely to be trustworthy.
Check for signs of credibility signaling: The last way we can try to get a sense of whom to trust is by looking at generally accepted forms of credibility signaling (e.g. an individual’s qualifications, the reputation of the institutions they belong to). This is the most common approach but the weakest single indicator. It has the advantage of being quick, but it’s also fairly unreliable compared to the other methodologies. A flashy website with lots of long-form content is a strong signal that a source has invested significant resources in it (funding or labor) but a pretty weak signal in terms of them being trustworthy. Credibility signaling is often where people go wrong with trusting a source – by giving a certain signal far too much weight compared to its actual correlation with reality.
How to speak to experts
Experts are, ultimately, just people like anyone else, so most standard conversational rules apply to them. A few elements to highlight are:
Be humble: Experts often know a lot about a field but might be worried about saying anything you, as a funder, might disagree with. Come in humble and open-minded, and you will be more likely to get useful responses.
Be prepared: It’s important to be prepared when consulting an expert to show that we value their time. If they have written a whole book on a given topic, we should, at the very least, review a summary before talking to them about it. The same goes for website content they have created. In addition to reading content beforehand, come prepared with questions and a view on which ones to prioritize or skip if time runs short.
Ask comparative questions: Few experts will have a great sense of the probability of something happening or a clear expected value for a given intervention, but they often give excellent answers to more comparative questions. For example, you’re more likely to get an answer to “Does X seem like it would cost more per person than Y?” than you are to get an exact number for either.
Give them space to think: Don’t move onto the next question immediately after they stop answering the previous one. Ensure that there is a small pause so that they can add something else if they think of it.
Ensure that they have answered your question: If you ask questions such as “What are the main strengths and weaknesses of x intervention?”, it is quite easy for them to forget the initial question once they have been talking about the strengths of the intervention for a few minutes. Follow up on this with something like “Thanks for outlining the strengths of x intervention; what do you think are the main weaknesses?”
Keep an eye out for potential mentors/connections: When speaking to experts, keep in mind that we might come across someone who could be a good fit as a potential mentor, particularly if they are very excited about the project and give great advice. Therefore, it is important to make a good impression and build relationships.
Start with low-stakes experts, and end with the highest-stakes: This will allow you to become comfortable with the process and improve your questions before speaking to high-stakes experts. Early interviews let researchers gather basic information to prepare for meaningful discussions with top experts. Starting broad builds knowledge gradually and shows respect for VIP experts' limited time. Well-prepared researchers ask better questions and make a good impression on key experts.
Keep it natural and conversational: Ideally expert interviews should feel like natural, back-and-forth conversations. Avoid sticking rigidly to scripted questions. Instead, respond organically to what the expert says to keep things flowing. Staying relaxed and attentive rather than formal creates a comfortable environment where the expert can speak freely and openly share insights.
How to find the right experts
You can find experts to interview through various means, such as:
Proactive searches: Review authors of key publications, speakers at events, faculty doing related research, or authors of important books/reviews.
Note them as you conduct research: Take note of names that come up during the course of your research, for example the author of a paper that is the main piece of evidence behind a given intervention, or the name of an advisor that keeps coming up on the advisory boards of implementing charities in that area.
Referrals: Ask each interviewed expert for recommendations of others you should speak with, in order to surface experts you may have overlooked. Peer referrals tap into first-hand knowledge of who the top experts really are.
How to contact experts
When contacting experts remember to:
Look for connections. A warm intro is always best. Check LinkedIn, and if you can find a mutual connection, use it. Always ask one expert to name other experts to talk to.
Be clear about your needs and expectations from the conversations, as well as how you identified them.
Recognize that their input is valuable.
Recording, note-taking and summarizing
To record, or not to record, that is the question: Recording expert interviews is highly beneficial for several reasons. It allows the interviewer to be fully present and focused on the discussion rather than distracted by taking extensive notes. Having the interview captured verbatim also simplifies summarizing key insights later, as the researcher can easily refer back to exact responses. Recordings support more accurate analysis than handwritten notes or memory alone. Recording also allows you to receive feedback on your interviewing skills – a key skill of a good researcher. For these reasons, recording interviews with experts is an ideal practice when feasible.
Although recording the interview would be ideal, some experts will be less open if they are being recorded, particularly if it’s a sensitive topic. This can be mitigated in part by specifying that the recording will be for internal note-taking purposes only and that we will ask the expert for permission before attributing any claims to them.
However, you may ultimately decide that the best way to get the information we need is to not record the meeting and to give the expert the option of keeping certain statements off the record.
Notetaking and summarizing: Whether we record the interview or not, we recommend spending 10 minutes after the conversation summarizing the key points into a single page while they’re fresh in our minds. You may wish to send a copy to the expert so they can comment if they feel we misunderstood anything. Later, we can synthesize the summaries of each expert interview into a concise summary of the collective expert view. This can include a narrative explanation of the most commonly held views, the most controversial ones, the apparent crux of any disagreements, and a table with a rough quantification of expert opinions.
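The “rough quantification of expert opinions” mentioned above can be as simple as coding each interview answer into a category and tallying the results, which also makes convergence and divergence visible at a glance. The sketch below assumes hypothetical coded answers from five interviews; the expert labels and categories are illustrative.

```python
from collections import Counter

# Hypothetical coded answers from five expert interviews to the same question,
# e.g. "How promising is intervention X?"
views = {
    "Expert 1": "promising",
    "Expert 2": "promising",
    "Expert 3": "promising",
    "Expert 4": "uncertain",
    "Expert 5": "not promising",
}

tally = Counter(views.values())
majority_view, majority_count = tally.most_common(1)[0]
convergence = majority_count / len(views)  # share of experts holding the modal view

print(f"Tally: {dict(tally)}")
print(f"Majority view: '{majority_view}' ({convergence:.0%} of experts)")
```

A high convergence figure only justifies extra confidence if the experts reached their views independently, per the earlier caveat about field-level convergence.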
Finally, here are some questions we may ask ourselves when evaluating how a researcher used expert input.
Guiding questions to assess expert interviews
Does the list of people consulted include some who are relevant for understanding any identified similar case studies?
Were the interviews well conducted, with questions asked in a non-leading way?
Was the right level of trust/skepticism given to expert views?
Were questions for interviews pre-prepared, and were these the correct questions?
Do the conclusions made in the expert views section match up with the expert conversation summaries and/or transcript?
Practice project and samples in our full PDF version.