Tony O'Hagan:
Can I make a plea that you don't frame the objectives in terms of eliciting just "quantitative estimates"? For me, as a statistician, it has always been important to quantify uncertainty around any estimate. My approach to elicitation, as I'm sure you know, is always to elicit complete probability distributions to quantify knowledge about an uncertain parameter. The mean or median might be natural "quantitative estimates", but the distribution comes first. Having said that, I'm not averse to eliciting an estimate provided (a) we know what that estimate means, e.g. it's a median, and (b) it comes with a quantified statement of uncertainty, e.g. a 90% interval. Anything less is in my opinion unacceptable in any scientific discourse. So please frame the problem in terms of "eliciting knowledge, in the form of a distribution or a well-defined estimate and a measure of uncertainty", or words to that effect.
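To illustrate the point: a well-defined estimate plus a quantified uncertainty statement is often enough to pin down a full distribution. The sketch below, a hypothetical example assuming a symmetric (normal) judgement, recovers a distribution from an elicited median and central 90% interval; the function name and numbers are illustrative, not a prescribed protocol.

```python
from statistics import NormalDist

def normal_from_elicitation(median, lo90, hi90):
    """Fit a normal distribution to an elicited median and central 90% interval.

    Assumes the expert's judgement is symmetric about the median; a skewed
    judgement would call for a different family (e.g. lognormal).
    """
    z = NormalDist().inv_cdf(0.95)  # half-width of a central 90% interval, in sd units (~1.645)
    sigma = (hi90 - lo90) / (2 * z)
    return NormalDist(mu=median, sigma=sigma)

# Example: expert states a median of 40 and is 90% sure the value lies in [25, 55]
dist = normal_from_elicitation(40, 25, 55)
```

Under the normal assumption, the fitted distribution reproduces the elicited interval exactly: its 5th and 95th percentiles fall at 25 and 55.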
Mark Burgman:
Rodrigo Estevez has been working in Chile, using expert elicitation methods to gather data as part of structured decision making in small fisheries. Nice case studies.
Anca Hanea and Victoria Hemming in Melbourne have been testing the protocols empirically. They have explored weighting experts and alternatives for aggregation.
Martine Baron at Warwick has worked with Anca to elicit priors for Bayes net applications.
Ans Vercammen at Imperial has new results on group IQ and the ability of groups to estimate facts accurately in the presence of misleading evidence.
Dominic Calleja:
According to IBM, 2.5 exabytes (that is, 2.5 billion gigabytes) of data were generated every day in 2012. One could be forgiven for believing that, with the Internet of Things, cheap sensors, and 'big data', we now have all the data we need. Unfortunately, having the right data is still a rarity in many disciplines. In environmental science, engineering, psychology, applied mathematics, and physics, to name but a few, there is still a reliance on experts to give their opinions or make educated guesses about the parameters of models or the likelihood of events.
The Institute for Risk and Uncertainty at the University of Liverpool will host a two-day symposium on expert elicitation on 13-14 February 2019. We are hoping to develop some synoptic guidance for people who must incorporate expert opinions into their quantitative risk assessments. There have recently been several prominent books on this topic from various quarters in the social, biological, and physical sciences, with rather divergent stories about best practices.
So what are the clear dicta about how we should elicit quantitative estimates from informants?
How can we evaluate expert estimates? How can we combine them and process them in subsequent calculations?
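On the combination question, one widely studied baseline is the linear opinion pool, in which the aggregated distribution is a weighted average of the individual experts' distributions. The sketch below is a minimal illustration, not an endorsed answer to the question; the normal judgements and equal weights are invented for the example.

```python
from statistics import NormalDist

def linear_pool_cdf(dists, weights, x):
    """Linear opinion pool: the pooled CDF at x is the weighted average of the
    experts' CDFs at x. Weights must be non-negative and sum to one."""
    return sum(w * d.cdf(x) for d, w in zip(dists, weights))

# Two hypothetical experts' elicited distributions, pooled with equal weights
experts = [NormalDist(mu=40, sigma=9), NormalDist(mu=55, sigma=6)]
weights = [0.5, 0.5]
pooled_p = linear_pool_cdf(experts, weights, 50)  # pooled P(value <= 50)
```

Whether equal weights suffice, or performance-based weights (as in Cooke's classical model) do better, is exactly the kind of question the empirical work mentioned above addresses.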
The event will consist of a number of proffered talks from distinguished meta-experts on the topic, together with extensive panel discussions drawing on the perspectives of both industrial and academic practitioners.