The Wikipedia page https://en.wikipedia.org/wiki/Expert_elicitation on expert elicitation (EE) is only a stub. We intend to substantially augment it. This page is a local space for us to collectively create the new article. Please augment the text below as a collaborator. You may annotate your additions and changes with your name or make them anonymously. Names will not be used in the Wikipedia article itself.
Expert elicitation (EE) is a structured process for synthesizing the subjective judgements of experts about quantities or events whose true values are uncertain, typically used when relevant empirical data are sparse, unobtainable, or too costly to collect. An elicitation may involve multiple experts, optionally with the assistance of a moderator or facilitator. Experts are sometimes remunerated for their contributions or their time.
Elicited outputs take many forms: quantitative true/false judgements, Likert-scale ratings, real-valued quantities, single-event or conditional probabilities, and even model structure. EE may also be used as a mechanism to track the origin and provenance of estimates and the opinions that inform them, although some EE schemes deliberately anonymize contributions.
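To make these output types concrete, here is a minimal sketch in Python of a record that could hold one expert's contribution, including optional provenance. All names here are illustrative assumptions, not taken from any existing EE tool:

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class ElicitedEstimate:
    """One expert's answer to one elicitation question (illustrative only)."""
    question_id: str
    kind: str                              # "true_false", "likert", "quantity", "probability"
    value: float                           # point estimate, Likert level, or probability
    quantiles: Optional[Dict[float, float]] = None  # e.g. {0.05: 2.0, 0.5: 5.0, 0.95: 11.0}
    expert_id: Optional[str] = None        # None if the scheme anonymizes contributions
    rationale: Optional[str] = None        # free-text provenance for the estimate

# A real-valued quantity elicited as three quantiles, contributed anonymously
estimate = ElicitedEstimate(
    question_id="Q7", kind="quantity", value=5.0,
    quantiles={0.05: 2.0, 0.5: 5.0, 0.95: 11.0},
    expert_id=None, rationale="Analogy with a comparable prior project.",
)
```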
Uses of EE
When relevant data do not exist
(Caroline comments):
Reasons for data not to exist: zero-failure components/systems, rare events, or a lack of time and/or money to collect data.
In engineering risk or reliability assessments, the reason depends on the project phase (new design: lack of data; during operation: lack of time).
When data exist, but there is no model.
When opinion should trump data
Methods
Delphi (see the sketch after this list)
Cooke's classical model (Roger Cooke)
SHELF (O'Hagan)
Surveys
Crowd sourcing
Kent's words of estimative probability
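As a concrete illustration of the Delphi method listed above, the following Python sketch simulates only the round structure: anonymous estimates, feedback of the group median, and revision until the estimates stabilize. The revision rule (each simulated expert moving partway toward the median, controlled by `pull`) is an assumption made for illustration, not part of Delphi itself:

```python
import statistics

def delphi_rounds(estimates, pull=0.5, tol=0.1, max_rounds=10):
    """Toy Delphi loop: feed back the group median each round and let each
    (simulated) expert move partway toward it until estimates converge."""
    for round_no in range(1, max_rounds + 1):
        median = statistics.median(estimates)
        estimates = [e + pull * (median - e) for e in estimates]
        spread = max(estimates) - min(estimates)
        print(f"round {round_no}: median={median:.2f}, spread={spread:.2f}")
        if spread < tol:                   # stop once experts (nearly) agree
            break
    return estimates

# Four anonymous first-round estimates of some uncertain quantity
final = delphi_rounds([10.0, 14.0, 25.0, 40.0])
```

In a real Delphi exercise the revision step is of course performed by the experts themselves after seeing the anonymized feedback, not by a fixed rule.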
Choosing experts
<<?>>
Snowballing
(Caroline comments):
Gustafson et al. (2003) identified and selected experts through a 'snowball nomination process', i.e. an expert was selected once nominated twice by his or her peers. When analysing the nominated experts, they tried to balance academics and practitioners.
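A minimal Python sketch of a two-nomination rule like the one Gustafson et al. describe; the threshold of two nominations comes from the note above, while the data and function names are illustrative:

```python
from collections import Counter

def snowball_select(nominations, threshold=2):
    """Select experts nominated at least `threshold` times by their peers.
    `nominations` maps each nominator to the peers they nominated."""
    counts = Counter(
        nominee
        for nominator, nominees in nominations.items()
        for nominee in nominees
        if nominee != nominator            # ignore self-nominations
    )
    return sorted(name for name, c in counts.items() if c >= threshold)

# Illustrative round of peer nominations
nominations = {"A": ["B", "C"], "B": ["C", "D"], "C": ["B"], "D": ["C"]}
print(snowball_select(nominations))        # ['B', 'C']
```

In a full snowball process the newly selected experts would be asked to nominate further peers, and the rule reapplied until the pool stops growing.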
Avoiding biases
Aggregation
Non-commutative with propagation (see the sketch after this list)
Weighting experts
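The non-commutativity point can be shown with a short Monte Carlo sketch (Python; equal-weight linear pooling of two invented experts is assumed): pooling the experts' input distributions and then propagating through a model does not give the same answer as propagating each expert's inputs and then pooling the outputs, because pooling first combines one expert's input with another's:

```python
import random

random.seed(0)

# Two experts, equal weights; each states a normal (mean, sd) for inputs A and B.
experts = [
    {"A": (1.0, 0.3), "B": (1.0, 0.3)},
    {"A": (3.0, 0.3), "B": (3.0, 0.3)},
]
N = 200_000

def draw_pooled(key):
    """Draw from the equal-weight linear pool of the experts' marginals."""
    mu, sd = random.choice(experts)[key]
    return random.gauss(mu, sd)

# (1) Aggregate first, then propagate through the model Z = A * B
z_aggregate_first = sum(draw_pooled("A") * draw_pooled("B") for _ in range(N)) / N

# (2) Propagate each expert separately, then aggregate the model outputs
def draw_expert_output():
    ex = random.choice(experts)            # pick one expert; keep A and B paired
    return random.gauss(*ex["A"]) * random.gauss(*ex["B"])

z_propagate_first = sum(draw_expert_output() for _ in range(N)) / N

print(f"aggregate then propagate: {z_aggregate_first:.2f}")   # ~4.0
print(f"propagate then aggregate: {z_propagate_first:.2f}")   # ~5.0
```

The same pool could use unequal weights to reflect expert performance, which is the weighting question above.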
Measuring expert performance
Consistency and accuracy
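One widely used performance measure is the calibration score of Cooke's classical model: experts state quantiles for 'seed' questions whose answers are known, and one checks whether the realizations fall into the inter-quantile bins at the theoretically expected rates. A minimal Python sketch (requires scipy; the quantile levels and counts are illustrative):

```python
from math import log
from scipy.stats import chi2

def calibration_score(bin_counts, bin_probs=(0.05, 0.45, 0.45, 0.05)):
    """Cooke-style calibration score: the plausibility of the observed
    inter-quantile bin counts under the hypothesis that the expert is
    perfectly calibrated. bin_counts holds how many seed realizations fell
    below the 5% quantile, in 5-50%, in 50-95%, and above the 95% quantile."""
    n = sum(bin_counts)
    s = [c / n for c in bin_counts]
    # Kullback-Leibler divergence of empirical bin frequencies from theory
    kl = sum(si * log(si / pi) for si, pi in zip(s, bin_probs) if si > 0)
    # 2*n*KL is asymptotically chi-squared with (number of bins - 1) d.o.f.
    return chi2.sf(2 * n * kl, df=len(bin_probs) - 1)

# 20 seed questions: a well-calibrated expert versus an overconfident one
print(calibration_score([1, 9, 9, 1]))   # 1.00: bin counts match theory exactly
print(calibration_score([5, 5, 5, 5]))   # ~0.0001: far too many surprises
```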
Validation
(Caroline comments):
Having an established procedure seems to be good practice in EE.
Journal papers appear more reliable when they state the elicitation procedure in the methods section than when they report only the probabilities obtained.
Definition of validation:
Kirwan (1997) split validation into internal validity (i.e. consistency within experts) and external validity (i.e. comparing experts' estimates to real or simulated data).
Good practices in validation:
Kirwan (1997) observed some good practices in validation, calling them 'experimental control requirements'. They were used for the validation of expert-judgement estimates:
- identical oral and written instructions, materials and environment for all participants
- sufficient time available for different techniques/assessors to make their assessments
- invigilation and other control measures to ensure each assessor/team/group has the same information upon which to base their assessments/judgements
- pilot-testing of validation trials to ensure that Human Reliability Analysis techniques have sufficient information, and that assessor/SME fatigue does not occur
- randomisation of technique trials to prevent bias to one or more techniques
- randomisation of scenarios to prevent biasing effects
- elicitation of confidence ratings to determine whether subjects are well calibrated (i.e. do they know when they are correct, and when they are probably incorrect); a minimal worked check of this appears below
- allowance and encouragement of comments if there were any other factors affecting their quantification (e.g. if they did not fully understand a task description, or felt the technique was inappropriate for that task type)
- double-blind experimental routine, so that the experimenters/invigilators do not know the true values, and so cannot knowingly or unknowingly influence assessor/SME performance
Field of research: Human Reliability Analysis.
Reference: Kirwan, B., 1997. Validation of human reliability assessment techniques: part 1—validation issues. Safety Science, 27(1), pp.25-41.
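Kirwan's point about eliciting confidence ratings can be made concrete: group judgements by stated confidence and compare the claimed level with the observed hit rate. A minimal Python sketch with invented data:

```python
from collections import defaultdict

def calibration_table(judgements):
    """Compare stated confidence with observed accuracy, one line per level.
    `judgements` is a list of (stated_confidence, was_correct) pairs."""
    bins = defaultdict(list)
    for confidence, correct in judgements:
        bins[confidence].append(correct)
    for confidence in sorted(bins):
        outcomes = bins[confidence]
        hit_rate = sum(outcomes) / len(outcomes)
        print(f"stated {confidence:.0%} -> correct {hit_rate:.0%} (n={len(outcomes)})")

# Invented data in which assessors are overconfident at the high end
judgements = [(0.5, 1), (0.5, 0), (0.7, 1), (0.7, 1), (0.7, 0),
              (0.9, 1), (0.9, 0), (0.9, 0), (0.9, 1), (0.9, 0)]
calibration_table(judgements)
# stated 50% -> correct 50% (n=2)
# stated 70% -> correct 67% (n=3)
# stated 90% -> correct 40% (n=5)
```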
Uncertainty associated with EE
Criticism of EE
Granger Morgan's PNAS article
Burgman et al. findings
Sources of bias
Overconfidence
Expert biases magnified
Group dynamics/dominance
De-biasing (Shlyakhter)
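One family of de-biasing proposals, associated with Shlyakhter's work on unsuspected errors, widens elicited uncertainty ranges by a factor estimated from past overconfidence. The Python sketch below shows only a simple normal-inflation version, which is an assumption for illustration rather than Shlyakhter's exact formulation:

```python
import statistics

def inflation_factor(past_errors, past_sds):
    """Estimate how much stated standard deviations should be widened, using
    past questions with known answers: the spread of the normalized errors
    (error / stated sd) would be about 1 for a well-calibrated expert."""
    z = [err / sd for err, sd in zip(past_errors, past_sds)]
    return max(1.0, statistics.pstdev(z))  # never shrink stated uncertainty

# Past performance: errors were roughly twice what the stated sds implied
past_errors = [2.1, -3.0, 1.8, -2.6, 2.4]   # (estimate - true value)
past_sds    = [1.0,  1.2, 0.9,  1.1, 1.0]   # stated standard deviations
k = inflation_factor(past_errors, past_sds)

stated_mean, stated_sd = 10.0, 2.0           # a newly elicited estimate
print(f"inflate sd by {k:.2f}: {stated_mean} +/- {k * stated_sd:.1f}")
```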
Problems with aggregation
Science of expertise
Cognitive models of expertise
Ethics considerations
Are experts a vulnerable population?
Packing the court
Conflicts of interest
Attribution, blame, provenance, and concealing opinion origins
Literature
<<References>>
Meyer, M.A. and Booker, J.M. Eliciting and Analyzing Expert Judgment: A Practical Guide. ASA-SIAM, 2001.
Cooke, R.M. Experts in Uncertainty: Opinion and Subjective Probability in Science. Oxford University Press, 1991.
O'Hagan, A. et al. Uncertain Judgements: Eliciting Experts' Probabilities. Wiley, 2006; also the SHELF (Sheffield Elicitation Framework) software and papers.
U.S. EPA Expert Elicitation Task Force White Paper.
etc.