Guidance on the proper use of expert elicitation (EE) in science and engineering is scattered across the literatures of different disciplines, and much of it has been written by the developers of methods or even by vendors of software tools for expert elicitation. This page is a collaboration to develop synoptic guidance that would be useful to would-be users of expert elicitation, especially users new to the technique. The "Liverpool" guidance developed here might initially be organized as a simple series of declarations, together with relevant references and examples where appropriate. It may become useful to add more structure as the text grows, such as flowcharts or other graphic elements.
To become a collaborator, please add to or amend the guidance. You may annotate your additions and changes with your name or make them anonymously. In either case, be sure to add your name to the Team page, which will be used to define authorship.
If you disagree with a claim or its motivation, say so in a comment rather than deleting it.
Liverpool Expert Elicitation Guidance <<draft>>
1) EE always involves collecting opinions from multiple experts
2) Keeping track of the provenance of estimates, opinions and justifying arguments is a primary purpose of EE
(Caroline comments): I would not include justifying arguments. Recording them seems to be good practice, but currently whether justifications are captured depends on the procedure you are using.
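The idea of tracking provenance can be made concrete with a small record structure. The sketch below is only illustrative; the field names (`expert_id`, `rationale`, `method`, etc.) are assumptions, not part of any standard EE tool, and whether a justifying rationale is stored at all may depend on the elicitation procedure, as noted in the comment above.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ElicitedEstimate:
    """One expert's estimate together with its provenance (illustrative)."""
    expert_id: str        # anonymised identifier for the expert
    quantity: str         # the quantity being elicited
    value: float          # point estimate (a distribution could be stored instead)
    elicited_on: date     # when the judgement was collected
    method: str = ""      # elicitation protocol used (e.g. "SHELF", "Delphi")
    rationale: str = ""   # the expert's justifying argument, if recorded

# A panel's judgements then form a simple audit trail:
panel = [
    ElicitedEstimate("E1", "failure rate (/yr)", 0.02, date(2021, 5, 4),
                     method="SHELF", rationale="ten years of plant records"),
    ElicitedEstimate("E2", "failure rate (/yr)", 0.10, date(2021, 5, 4),
                     method="SHELF", rationale="analogy with a similar component"),
]
```

Keeping each value tied to who gave it, when, and why makes later review or re-weighting of the panel possible without re-elicitation.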
3) Simply averaging disparate estimates from different experts is rarely appropriate
(Caroline comments):
In the special case of eliciting conditional probabilities for a Bayesian network, Mkrtchyan et al. (2016) have reviewed methods used to reduce the expert elicitation requirements in populating the Conditional Probability Tables (CPT). The methods are based on extracting model information elicited from selected distributions and extrapolating this to the whole CPT.
Field of research: Human Reliability Analysis.
Reference: Mkrtchyan, L., Podofillini, L. and Dang, V.N., 2016. Methods for building conditional probability tables of Bayesian belief networks from limited judgment: An evaluation for human reliability application. Reliability Engineering & System Safety, 151, pp.93-112.
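Why simple averaging can mislead is easy to demonstrate numerically. In the sketch below (an assumed toy setup, not drawn from any of the cited work), two confident experts hold sharply conflicting beliefs about the same quantity; the equal-weight linear pool of their densities is bimodal and assigns almost no probability to the region around its own mean, a belief held by neither expert.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution, evaluated on a grid."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Two experts give confident but disparate densities for the same quantity.
x = np.linspace(-2.0, 12.0, 1401)
f1 = normal_pdf(x, mu=1.0, sigma=0.5)
f2 = normal_pdf(x, mu=9.0, sigma=0.5)

# Equal-weight linear pool (i.e., the simple average of the two densities).
pool = 0.5 * f1 + 0.5 * f2

# The pool's mean sits midway between the experts (~5.0), yet the pooled
# density there is negligible: the "average" value is implausible to both.
mean_pool = np.sum(x * pool) / np.sum(pool)
density_at_mean = pool[np.argmin(np.abs(x - mean_pool))]
```

Whether some structured aggregation (weighted pooling, behavioural consensus, or a decision-maker's own synthesis) is preferable depends on the application; the point here is only that the unweighted average can represent nobody's opinion.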
4) Users of EE should seek to put disagreeing voices on the panel of experts
5) More reliable estimates are obtained if a procedure is established beforehand
(Caroline comments):
Having an established procedure seems to be good practice to adopt in EE.
Journal papers that state the elicitation procedure in their methods section seem more reliable than journal papers that report only the probabilities obtained in EE.
Known procedures include SHELF and Delphi (see the draft Wikipedia article).
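The Delphi idea of controlled feedback rounds can be caricatured in a few lines. The sketch below is a deliberately crude assumption: real Delphi exchanges written rationales between anonymous experts, whereas here each expert simply moves part-way towards the group median after each round. It only illustrates the convergence mechanism, not the protocol itself.

```python
import statistics

def delphi_round(estimates, weight=0.5):
    """One highly simplified feedback step: after seeing the group median,
    each expert revises part-way towards it.  (A mechanical stand-in for
    the reasoned revision that a real Delphi round would produce.)"""
    med = statistics.median(estimates)
    return [e + weight * (med - e) for e in estimates]

estimates = [0.02, 0.05, 0.10, 0.40]   # initial, disparate judgements
initial_spread = max(estimates) - min(estimates)

for _ in range(3):                     # a few controlled feedback rounds
    estimates = delphi_round(estimates)

final_spread = max(estimates) - min(estimates)
```

After three rounds the spread of opinions has shrunk substantially, which is the behaviour Delphi relies on; whether such convergence reflects genuine agreement or mere conformity is a separate question the facilitator must watch for.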