Tonight on Charlie Rose, they did another in their series on the human brain with Eric Kandel, Daniel Kahneman, Michael Shadlen, Walter Mischel, and Alan Alda. You should probably watch it. They talked about imaging the neurons that collect statistical evidence. I almost peed myself when this exchange occurred:
Alan Alda: "I've wondered about this for a long time... Is it possible that you have competing parts of the brain, each looking for dominance, and one may choose a path that is totally against the probabilities but, if suppressed by other areas, would make a more rational choice?"
Daniel Kahneman: "What really happens is, you know, you would have that at the level of a single neuron when the evidence is ambiguous. And so there is going to be some randomness in the behavior of the neuron, and in the behavior of the monkey."
I'm excited because Danny doesn't even entertain the notion of a multicameral mind.
Scott
Scott,
Thanks, I hope that they go well. Is EBD a synonym for EBC?
Interesting excerpt. I will read the PDF in more detail. I am not really sure what either you (Scott) or Jack are talking about. Assuming the data are ambiguous (which, as presented, I would say they are: two estimates of the exact same thing, each estimate having equal validity/credibility), how is ignoring the data a worst-case assessment? If the interval "15-35%" was given, 15% should be considered the worst case among the available data, right? So, if the observations were just due to risk aversion, then the interval 15-35% should evoke the same decisions as the 15% value alone, no? I say this because I am skeptical of people calculating and assessing averages (not to mention confidence intervals) before making casual decisions, but that is just my own speculation/opinion. Maybe there is some sort of visceral, reflexive estimation of averages, but I am not at all sure about that.
0% was not a number that was presented as any response to any survey. Saying nothing about a survey is probably different from saying "0% of restaurant goers in the downtown area expressed a strong interest in Chinese food". Wouldn't you guys agree?
This is just a preliminary thought or two, I am probably missing something important or am just plain dumb, I will read in more detail.
Jason
****************************************************************************************************************
On Wed, Feb 27, 2013 at 1:12 AM, SandP8 <sandp8@gmail.com> wrote:
Jason:
By the way, break a leg on your interviews.
The RFF report Not a Sure Thing (Krupnick et al. 2006) has a lot of interesting stuff in it, and your EBD review should definitely draw from it. I think you should steal some time to work on EBD.
One thing that is critical for us, and for the p-boxy approach generally, is the work of van Dijk and Zeelenberg (2003) about how contaminating a decision under risk with a little bit of epistemic uncertainty causes people to abandon their risk (i.e., probabilistic) assessments and resort to a worst-case assessment. These studies were discussed at the bottom of page 174 in Krupnick et al., Not a Sure Thing: Making Regulatory Choices under Uncertainty (http://www.rff.org/rff/Documents/RFF-Rpt-RegulatoryChoices.pdf):
Van Dijk and Zeelenberg (2003) report the results of three experiments that appear to confirm this hypothesis. In one experiment, they asked a control group of respondents if they would be willing to invest in a new Chinese restaurant in a downtown area if they knew that a rival Chinese restaurant would soon be opening in the area. To a second group, they presented the same scenario plus the additional information that a recent survey had shown that 15% of restaurant goers in the downtown area expressed a strong interest in Chinese food. A similar scenario was presented to a third group, except the percentage of restaurant goers in the area expressing a strong interest in Chinese food was raised to 35%. As expected, individuals provided with the 15% number were more willing to invest in the venture than the individuals provided with no extra information, and the individuals told that it was 35% were more likely to invest than the two other groups. However, the researchers told a fourth group that two surveys had been conducted and provided both numbers (15% and 35%). The willingness to invest of this fourth group was less than that of the second and third groups and in fact was not statistically different from that of the control group, which had received no survey results. The two other experiments produced similar results, leading the researchers to conclude that individuals tend to discount ambiguous information, meaning that it does not alter the decisionmaking process.
The reasons for this effect are not clear. One possibility is that choosing under conditions of ambiguity is cognitively demanding. Another possibility is that people may be concerned that their decisions will be subsequently evaluated and have a difficult time providing a rationale for their choices under conditions of ambiguity (Curley et al. 1986).
van Dijk, E., and M. Zeelenberg. 2003. The discounting of ambiguous information in economic decision making. Journal of Behavioral Decision Making 16: 341–352.
Of course, I think the reasons for the effect described by van Dijk and Zeelenberg actually are perfectly clear, and arise from the conflict of the two mental processors.
Scott
****************************************************************************************************************
That is exactly what I think. If they had used 85% and 65% (95% CI of 55.4% to 94.6%), then the probability of investment would have been significantly higher than with no information about potential customers, and lower than for each percentage alone. If you ask people whether they would rather start their new restaurant in neighborhood 1, where there is no information about how many people like Chinese food, or in neighborhood 2, where one consultant says 65% like Chinese food and another consultant says 85% like Chinese food, everyone is going to pick neighborhood 2.
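(The interval in parentheses can be reproduced, as a sketch, by treating the two percentages as n = 2 observations of the same quantity and computing a normal-approximation 95% confidence interval for the mean, using z ≈ 1.96 rather than a t critical value:)

```python
import math

# Two consultants' estimates of the same percentage, treated as n = 2 observations
obs = [65.0, 85.0]
n = len(obs)
mean = sum(obs) / n                                # 75.0
var = sum((x - mean) ** 2 for x in obs) / (n - 1)  # sample variance = 200.0
se = math.sqrt(var / n)                            # standard error of the mean = 10.0
lo, hi = mean - 1.96 * se, mean + 1.96 * se        # normal-approximation 95% CI
print(round(lo, 1), round(hi, 1))  # 55.4 94.6
```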
I don't doubt at all the validity of the modular brain model, I just don't think that this particular experiment has anything directly to do with ambiguity. Plus I don't even know how you would easily get ambiguity into a situation like this.
****************************************************************************************************************
On 2/27/2013 10:20 AM, SandP8 wrote:
J:
Part of what you are saying is crazy talk, but part of it is reasonable.
I suppose it doesn't really matter what we call it. At some level, all uncertainty is epistemic; even aleatory uncertainty is epistemic (with the extra partial knowledge that you're drawing from some fixed distribution). At this level, I think you're right that people just don't like uncertainty at all. And separating the two kinds is like dividing ice from snow, as we've said a lot. That's all surely true.
But you still need to explain the observation that the risk calculator seems to get countermanded when the amygdala gets tickled. It's the discontinuity that's surprising, I think. And that the reversion is to no probability at all. You can redesign the experiment--and maybe you should--so that, even with your suggested confidence interval on the measurements, one would not get all the way to zero. Then see what happens. I'll bet you dollars to doughnuts that you'll still see a reversion all the way to zero...that is, to the worst-case analysis. It is the "fuck it!" response which is soooo common in humans. Now, maybe it's just that I've got a theory looking for data, but I think the dual processing does fit this scene pretty well.
And, by the way, I'll also bet that people cannot come up with confidence intervals in their heads. People are famously bad at doing this, even for things they know about. Doesn't the Plous paper say this?
Dang. We should put all this, including my email to Jason, your not-buying response, this rejoinder, and anything else you say now, on the appropriate Say page. Can you please do this?
Sc
****************************************************************************************************************
On Wed, Feb 27, 2013 at 10:33 AM, Jack Siegrist <jack@ramas.com> wrote:
Why isn't what they saw just risk aversion? I know that they want to treat each possible percentage as a probability of success, so that the percentage already represents uncertainty; two percentages must then mean uncertainty about uncertainty, and therefore epistemic uncertainty. But I don't buy it.
The two percentages in the normal world would just represent two separate measurements. If you treat the two percentages as two observations then a 95% confidence interval for the mean is 5.4% to 44.6%. The low end is nearly zero, so if I were risk averse I might focus on the low end of this interval and make the same decision as the control group.
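A minimal sketch of that calculation, for what it's worth. The 5.4% to 44.6% figure follows from a normal-approximation interval (z ≈ 1.96); note that with only two observations a proper t-interval (df = 1) would be far wider:

```python
import math

obs = [15.0, 35.0]            # the two survey percentages as two observations
n = len(obs)
mean = sum(obs) / n           # 25.0
sd = math.sqrt(sum((x - mean) ** 2 for x in obs) / (n - 1))  # sample sd ~ 14.14
se = sd / math.sqrt(n)        # standard error = 10.0
lo, hi = mean - 1.96 * se, mean + 1.96 * se   # z-based 95% interval
print(round(lo, 1), round(hi, 1))  # 5.4 44.6
# Note: with df = 1, the t critical value is about 12.71, so a t-interval
# would span roughly 25 +/- 127, i.e., effectively the whole 0-100% range.
```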
I am betting that people can automatically come up with something close to a 95% confidence interval in their head, even if they cannot numerically specify it.
Jack
****************************************************************************************************************
On 2/26/2013 11:59 PM, SandP8 wrote:
I found the document, the relevant text, and the references it cites. When I, well-known space cadet, ask you this same question in the future, please tell me the answer is this:
Studies by Van Dijk and Zeelenberg (2003) mentioned at the bottom of page 174 in Krupnick et al., Not a Sure Thing: Making Regulatory Choices under Uncertainty (http://www.rff.org/rff/Documents/RFF-Rpt-RegulatoryChoices.pdf)
Van Dijk and Zeelenberg (2003) report the results of three experiments that appear to confirm this hypothesis. In one experiment, they asked a control group of respondents if they would be willing to invest in a new Chinese restaurant in a downtown area if they knew that a rival Chinese restaurant would soon be opening in the area. To a second group, they presented the same scenario plus the additional information that a recent survey had shown that 15% of restaurant goers in the downtown area expressed a strong interest in Chinese food. A similar scenario was presented to a third group, except the percentage of restaurant goers in the area expressing a strong interest in Chinese food was raised to 35%. As expected, individuals provided with the 15% number were more willing to invest in the venture than the individuals provided with no extra information, and the individuals told that it was 35% were more likely to invest than the two other groups. However, the researchers told a fourth group that two surveys had been conducted and provided both numbers (15% and 35%). The willingness to invest of this fourth group was less than that of the second and third groups and in fact was not statistically different from that of the control group, which had received no survey results. The two other experiments produced similar results, leading the researchers to conclude that individuals tend to discount ambiguous information, meaning that it does not alter the decisionmaking process.
The reasons for this effect are not clear. One possibility is that choosing under conditions of ambiguity is cognitively demanding. Another possibility is that people may be concerned that their decisions will be subsequently evaluated and have a difficult time providing a rationale for their choices under conditions of ambiguity (Curley et al. 1986).