Please review and make contributions to this paper. For now, we can edit the paper here, as a Google site. To make a change or add text, just click on the pen icon above (on the right of the screen). When you're done, be sure to save your edits by clicking the blue Save button. As the paper matures we may elect to migrate to a Google Doc, but there will always be access to it from this website. To see previous versions and compare revisions, select Revision History from the More menu (on the right above). Details that must eventually be addressed are indicated in double angle brackets <<like this>>. You can use color to highlight passages or questions for our joint review. More enduring or 'meta' commentary can be appended to this page as a comment at the bottom of the page.
Decision paradoxes and biases explained by multicameral neural processing
Paradoxes and cognitive biases are explained by competing neural calculators
Risk analysis and risk communication in an evolutionary context
What biology and evolution tell us about risk communication
Human irrationality is a misperception
Evolution has not favored only the probability calculus
Contributors (alphabetical):
Scott Ferson,
Christian Luhmann,
Jason O'Rawe,
Jack Siegrist,
W. Troy Tucker
Friends to consult/acknowledge: Frances White (Duke), Peter Wiedemann (Karlsruhe), Donna Gresh (IBM)
As widely recognized at least since Keynes and Knight, humans treat epistemic uncertainty (i.e., lack of knowledge) and variability (stochasticity or aleatory uncertainty) very differently. Recent clinical and neuroimaging evidence suggests that the multicameral human brain handles decision making under uncertainty by employing multiple separate neural processors. One of these processors is devoted to risk calculations based on frequency information while another handles detection and processing of epistemic uncertainty. A third processor attends to fairness calculations and cheater detection, which often turn out to play a significant role in the cognition of risks and uncertainty. These processors are localized to different parts of the brain, mediated by different chemical systems, and separately activated depending on the format in which sensory stimuli present uncertainty information. When multiple processors are activated simultaneously, they can give conflicting resolutions, but the brain may prioritize the considerations of one of these calculators over another. We explore the effect that these sometimes competing processors have on perception and cognition of uncertainty and suggest that several famous paradoxes in probability and decision making may arise because of the interplay between these mental processors. These phenomena include loss aversion, ambiguity aversion and the Ellsberg Paradox, hyperbolic discounting, probability distortion, the two-dimensionality of risk perception, and others. Although these phenomena are usually presumed to be biases or cognitive illusions which are manifestations of human irrationality about risk and decision making, we describe why these phenomena may actually be advantageous in humans and other species. We also place them in an evolutionary context where, rather than being failings or limitations of the human brain, they instead seem to promote evolutionary fitness. The psychological and neurological evidence suggests that the two kinds of uncertainties should not be rolled up into one mathematical concept in risk assessment, but require a two-dimensional view that respects the biological realities of human decision makers and the evolutionary forces that shaped them. << characters>>
Epistemic uncertainty, which is the result of poor measurements and incomplete knowledge, is often distinguished from aleatory uncertainty, which is caused by stochasticity or variability in the world. As recognized at least since Keynes and Knight, humans treat these two kinds of uncertainty very differently. When given simple frequency information, humans appear to make risk calculations in a manner consistent with probability theory and Bayesian norms, but the presence of even small epistemic uncertainty disrupts these calculations and produces decisions that typically focus exclusively on the worst-case outcomes, ignoring the available probability information. There are, in fact, many similar cognitive “biases and heuristics” that have been described by decision scientists and psychometricians over the last several decades, which are widely considered manifestations of human irrationality about risks and decision making.
Recent clinical and neuroimaging evidence suggests that humans are endowed with at least two special-purpose uncertainty processors in the multicameral human brain. One of these processors is devoted to risk calculations while the other handles detection and processing of epistemic uncertainty. These processors are localized in different parts of the brain and are mediated by different chemical systems that are separately activated by the format of sensory input. When both processors are activated, they can give conflicting resolutions, but the brain appears to often give priority to considerations of incertitude over variability.
We explore the effect that these competing processors have on perception and cognition of uncertainty and suggest that several famous paradoxes in probability and decision making may arise because of the interplay between these mental processors. These phenomena include loss aversion, ambiguity aversion and the Ellsberg Paradox, hyperbolic discounting, the two-dimensionality of risk perception, and others. Although these phenomena are usually presumed to be biases or cognitive illusions, we describe the adaptive significance of these phenomena in humans and other species and place them in an evolutionary context where they do not appear to be failings of the human brain but rather adaptations. The psychological and neurological evidence suggests that epistemic and aleatory uncertainty should not be rolled up into one mathematical concept in risk assessment, but require distinct approaches that respect biological realities within the decision-maker.
probability paradoxes; cognitive biases; risk aversion; loss aversion; ambiguity aversion; Allais paradox; Ellsberg paradox; hyperbolic discounting; Ultimatum Game; probability neglect; cheater detection; fMRI neural imaging
Abstract <<271 words; 1904 characters>>
Decision scientists and psychometricians have described many cognitive biases over the last several decades, which are widely considered to be manifestations of human irrationality about risks and decision making. These phenomena include neglect of probability, loss aversion, probability distortion, hyperbolic discounting, ambiguity aversion and the Ellsberg Paradox, among many others. We suggest that all these and perhaps other recognized biases arise from the interplay between distinct special-purpose processors within the multicameral human brain whose existence is implied by recent clinical and neuroimaging evidence. Although these phenomena are usually presumed to be misperceptions or cognitive illusions, we describe the evolutionary significance of these phenomena in humans and other species, and we place them in their biological context where they do not appear to be failings of the human brain but rather evolutionary adaptations to life in an uncertain and risky world. Apparent paradoxes arise when psychometricians attempt to interpret human behaviors against the inappropriate norm of the theory of probability, which turns out to be an overly precise calculus of uncertainty when different mental processors give contradictory results. This view of the psychological and neurological evidence also suggests why risk communication efforts so often dramatically fail and how they might be substantially improved. For instance, it now seems clear that what risk analysts call epistemic uncertainty (i.e., lack of knowledge or ambiguity) and aleatory uncertainty (variation or stochasticity) should not be rolled up into one mathematical probabilistic concept in risk assessments, but they instead require an analysis that distinguishes them and keeps them separate in a way that respects the cognitive functions within the decision makers to whom risk communications are directed.
Alan Alda: "I've wondered about this for a long time... Is it possible that you have competing parts of the brain, each looking for dominance, and one may choose a path that is totally against the probabilities but if suppressed by other areas would make a more rational choice?"
Knight
Keynes
Incertitude = epistemic uncertainty
Arises from incomplete knowledge
Incertitude arises from
limited sample size
mensurational limits (‘measurement uncertainty’)
use of surrogate data
Reducible with empirical effort
Variability = aleatory uncertainty
Arises from natural stochasticity
Variability arises from
spatial variation
temporal fluctuations
manufacturing or genetic differences
Not reducible by empirical effort
Two kinds of uncertainty
Variability
Aleatory uncertainty
Type A uncertainty
Stochasticity
Randomness
Chance
Risk
Incertitude
Epistemic uncertainty
Type B uncertainty
Ambiguity
Ignorance
Imprecision
True uncertainty
Freezing and procrastination are common responses to stimulation of the amygdala. Such behavior has several advantages.
See https://sites.google.com/site/workinguncertain/say/procrastinationfosterscreativity
Propagating incertitude
A few grossly uncertain inputs
A lot of grossly uncertain inputs...
The traditional approach fails for incertitude
Traditional probability theory doesn’t account for gross uncertainty correctly
Output precision depends strongly on the number of inputs and not so much on their shapes
The more inputs, the tighter the answer and the more serious the error, as the sketch below illustrates
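A minimal numeric sketch of this effect (the inputs are hypothetical, not from any case study in the paper): each of n inputs is known only to lie in [0, 1]. Treating that incertitude as a uniform distribution makes the Monte Carlo range of the average shrink as n grows, while rigorous interval bounds on the average never tighten at all.

import numpy as np

rng = np.random.default_rng(1)
for n in (2, 10, 100):
    # pretend each grossly uncertain input is uniform on [0, 1] and average them
    avg = rng.uniform(0.0, 1.0, size=(100_000, n)).mean(axis=1)
    lo, hi = np.percentile(avg, [2.5, 97.5])
    # the probabilistic answer tightens with n; the honest interval stays [0, 1]
    print(f"n={n:4d}  Monte Carlo 95% range: [{lo:.3f}, {hi:.3f}]   interval bounds: [0, 1]")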
Probability is hard
Probability is a very young discipline
Only two or three centuries old, while other branches of mathematics are more than 22 centuries old
Only invented for resolving games of chance
Probability is famously counterintuitive
Monty Hall problem embarrassed prominent scholars
Even experts make egregious mistakes (e.g., Laplace)
De Morgan left probability because it was too hard
Rife with paradoxes, unlike any other branch of math
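To make the Monty Hall claim above concrete, here is a small simulation sketch (the standard puzzle setup, not tied to any particular source in this paper) confirming the counterintuitive answer that switching doors wins about two-thirds of the time:

import random

def play(switch: bool) -> bool:
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # the host opens a door that is neither the contestant's pick nor the car
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
print("stay  :", sum(play(False) for _ in range(trials)) / trials)  # about 1/3
print("switch:", sum(play(True)  for _ in range(trials)) / trials)  # about 2/3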
Probability paradoxes
Ellsberg paradox
St. Petersburg paradox
Two-envelopes problem
Monty Hall problem
Simpson’s paradox
Bertrand paradox
Berkson’s paradox
Sleeping Beauty problem
Cognitive biases
Psychometricians have documented cognitive biases in humans that make us prone to judgment errors because of the way our brains are wired
Groupthink, stereotyping, memory flaws, optical illusions
Kahneman and Tversky reviewed many such biases in how humans perceive risks and uncertainties and make decisions
Decision biases
Ambiguity aversion (Ellsberg paradox)
Avoiding options when probabilities seem unknown
Loss aversion
Disliking a loss more fervently than liking a gain of the same magnitude
Zero-risk bias
Preferring to reduce a small risk to zero over a greater reduction in a larger risk
Anchoring
Relying too heavily on a past reference or one piece of information
Inaction aversion
Insisting on “doing something” even if the optimal decision is to stand down
Risk aversion (Allais paradox)
Preferring a sure thing over a gamble with an equivalent or higher expectation
Availability heuristic
Estimating likelihood of something by the ease with which it’s remembered
Uncertainty biases
Probability misperception
Overestimating chances of rare outcomes, underestimating chances of common ones
Conjunction fallacy
Assuming that specific conditions are more probable than general ones
Pseudocertainty
Making risk-averse choices for positive outcomes, but risk-seeking for negative
Overconfidence
Excessive confidence in one’s own predictions
Base rate fallacy
Neglecting available statistical data in favor of particulars
Neglect of probability
Disregarding probability in decision making under uncertainty
Other biases
Clustering illusion
Seeing patterns in noise
Gambler's fallacy
Thinking future probabilities are altered by past events, e.g., P(head | 4 tails)
Framing
Drawing different conclusions based on how data are presented
Regression toward the mean
Expecting extreme performance to continue
Ludic fallacy
Believing that chance in life is like chance in games
Primacy
Weighting initial events more than subsequent events
Recency
Weighting recent events more than earlier events
Hyperbolic discounting
Strongly preferring immediate payoffs over later ones (more intense the closer to the present the payoffs are); see the sketch after this list
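A toy comparison (all payoff numbers invented for illustration) shows what makes hyperbolic discounting distinctive: it produces preference reversals as a fixed pair of payoffs recedes into the future, which exponential discounting can never do.

def hyperbolic(amount, t, k=0.2):
    # hyperbolic discounting: V = A / (1 + k*t)
    return amount / (1.0 + k * t)

def exponential(amount, t, d=0.99):
    # exponential discounting: V = A * d**t
    return amount * d**t

for delay in (0, 52):  # the same choice offered now vs. pushed a year into the future
    small = (100, delay)      # $100 sooner
    large = (110, delay + 1)  # $110 one week later
    hyp = "small" if hyperbolic(*small) > hyperbolic(*large) else "large"
    exp = "small" if exponential(*small) > exponential(*large) else "large"
    print(f"delay={delay:2d} weeks  hyperbolic picks {hyp}, exponential picks {exp}")
# hyperbolic flips from "small" to "large" as both payoffs recede; exponential never flips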
Heuristics
These biases are presumed to be the result of using mental shortcuts, called “heuristics”
Thus humans’ misconceptions are the result of bad wiring in our brains
And, apparently, people are especially stupid about risks and uncertainty
But how can this be?
If our biases are so bad, how is it that humans have been so successful evolutionarily?
Risks and uncertainty have likely been pretty prominent in evolution over the last 10⁶ years
Luce noted that the main finding of decision theory is that humans don’t make decisions the way decision theory says they should
Neuroscience of risk perception
Instead of being divided into rational and emotional sides, the human brain has many special-purpose calculators
(Marr 1982; Barkow et al. 1992; Pinker 1997, 2002)
The format of sensory data triggers a calculator
(e.g. Cosmides & Tooby 1996; Gigerenzer 1991)
Different calculators give competing solutions, or calculate different components of total risk
(e.g. Glimcher & Rustichini 2004 and references therein)
Multiple calculators may fire
There are distinct calculators associated with
Probabilities and risk (variability): the medical students example
Ambiguity and uncertainty (incertitude): Hsu et al.
Trust and fairness (cheater detection): the Ultimatum Game
Brain processes them differently
Different parts of the brain
Different chemical systems
They can give conflicting responses
Risk aversion
You’ll get $1000 if a ball drawn at random from an urn with half red and half blue balls is red, or you can just take $500 now
Which prize do you want?
Ambiguity aversion
Balls can be either red or blue
Two urns, both with 36 balls
Get $1000 if a random ball is red
Which urn do you want to draw from?
Ellsberg Paradox
Balls can be red, blue or yellow (probabilities R, B, Y)
The urn has 30 red balls and 60 other balls
Don’t know how many are blue, how many are yellow
Gamble A: get $1000 if you draw red. Gamble B: get $1000 if you draw blue.
Gamble C: get $1000 if you draw red or yellow. Gamble D: get $1000 if you draw blue or yellow.
Persistent paradox
People always prefer unambiguous outcomes
Doesn’t depend on your utility function or payoff
Not related to risk aversion
Not explained by probability theory, or by prospect theory
Most people prefer A to B (so are saying R > B) but also prefer D to C (saying R < B)
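A quick computational check (using the urn as described above; the enumeration is ours, not from the cited literature) confirms that no single probability assignment for blue can rationalize both modal choices:

from fractions import Fraction

# urn: 30 red out of 90; blue + yellow = 60, composition unknown
p_red = Fraction(30, 90)
for n_blue in range(0, 61):
    p_blue = Fraction(n_blue, 90)
    p_yellow = Fraction(60 - n_blue, 90)
    prefers_A = p_red > p_blue                        # EV(A) > EV(B)
    prefers_D = p_blue + p_yellow > p_red + p_yellow  # EV(D) > EV(C)
    if prefers_A and prefers_D:
        print("consistent n_blue =", n_blue)
# prints nothing: preferring A over B needs p_blue < 1/3,
# while preferring D over C needs p_blue > 1/3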
Other species
Chimpanzees and bonobos prefer peanuts (which they like less than bananas) when they do not know the probability of getting bananas
fMRI
Hsu et al. (2005) found localized regions of activity in the brain under situations of ambiguity (incertitude)
Amygdala associated with processing fear and threat
Ambiguity/incertitude detector
Humans have an incertitude processor
Triggered by situations with ambiguity
Especially focused on the worst case
Common response is procrastination
Best comments:
I'll watch this later... Posted by: theonlyhankinla | Sep 19, 2012 11:32:33 AM
I was going to write that, but I waited too long. Posted by: Critifur | Sep 19, 2012 11:58:27 AM
This video presumes procrastination is essentially a disease or a bad habit. But, in fact, procrastination can be an advantageous behavior, especially in the context of uncertainty in which danger can lurk.
The early bird gets the worm, but the second mouse gets the cheese.
The Guardian notes
Procrastination has several substantial advantages including
better outcomes
it is better to use all the latest information, which may not be available initially
automatically prioritizes
put off boring work until it can be displacement behavior for something even worse
avoids unnecessary work
sometimes the work doesn't need to be done after all, so doing it early is a waste; Kedrosky described a “nagging suspicion that a lot of the things that I get asked to do I don’t actually have to do.”
it's more fun and more efficient
See also essays at Fluent Time Management and Alternet.
Functional organ
Normal feature of the human brain
Not a product of learning
Visible in fMRI
Brain lesions can make people insensitive to incertitude…so they behave as Bayesians
But why pessimism?
Pessimism is often advantageous evolutionarily
Natural selection can favor pessimism
Death is ‘hard selection’
Animal foraging strategies
Programmed plant behaviors
Being wrong often has asymmetric consequences
Foraging: Finding dinner versus being dinner
Competition: Preemption versus being preempted
Collisions of the two cameras
Ambiguity aversion (Ellsberg paradox)
Probability neglect
Loss aversion
Framing effects
Hyperbolic discounting
Two-envelopes problem
Slovic’s two-dimensional plot of risks
The third fairness calculator explains even more
Human irrationality
Risk analysis from a biological perspective
Communication (Krebs)
Behavior by a speaker to evoke a reaction in a hearer
Not to “share information”
Not altruistic
Reasoning (BBS paper this year)
Mechanism by which we craft arguments that will be compelling to a listener
Not to “uncover truth”
Not free from fallacy
Risk analysts are self-deluded
Risk analysts may believe themselves to be dispassionate and disinterested
But people implicitly know risk communicators are trying to get them to do something
They also know that analysts’ numbers could be wrong…or they could be lying
Risk analysts need to publicly acknowledge this
Take-home messages
Humans are wired by evolution to process incertitude separately and differently from variability
This explains many “paradoxes” and “biases”
Risk analysis must (and can) distinguish them
Risk analysis should be more attuned to the biological realities of human cognition
End
Probability bounds analysis (PBA)
Convenient version of imprecise probability
Bridge between qualitative and quantitative
Sidesteps the major criticisms
Doesn’t force you to make any assumptions
Uses only whatever information is available
Distinguishes variability and incertitude
Merges interval and probabilistic analysis
Used by both Bayesians and frequentists
Probability box (p-box)
P-boxes
We can do maths with p-boxes
A = {lognormal, mean = [.05, .06], variance = [.0001, .001]}
B = {min = 0, max = 0.05, mode = 0.03}
C = {sample data = 0.2, 0.5, 0.6, 0.7, 0.75, 0.8}
D = [0, 1]
Probability of A & B & C & D
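One way to see what such a conjunction calculation involves is the Fréchet case, where nothing is assumed about dependence among the events. The sketch below is a crude stand-in for the full p-box computation: it collapses A, B and C to hypothetical probability intervals (loosely suggested by the summaries above; D = [0, 1] is as given) and bounds the conjunction rigorously.

def frechet_and(p, q):
    # Fréchet bounds on P(X & Y) with no dependence assumption:
    # max(0, p + q - 1) <= P(X & Y) <= min(p, q), extended to interval inputs
    return (max(0.0, p[0] + q[0] - 1.0), min(p[1], q[1]))

A = (0.05, 0.06)  # hypothetical interval summary of the lognormal p-box
B = (0.0, 0.05)   # from the min/max of B
C = (0.2, 0.8)    # loosely from the range of the sample data
D = (0.0, 1.0)    # vacuous: total incertitude

result = A
for event in (B, C, D):
    result = frechet_and(result, event)
print("P(A & B & C & D) is in", result)  # (0.0, 0.05): the lower bound hits zero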
Is there any evolutionary advantage to behaving in this way? Is it, in some sense, the right reaction to the problem, despite the normativity claims of the proponents of the subjective expected utility theory? Is there an evolutionary selective advantage to processing incertitude (or ambiguity, or epistemic uncertainty or just ignorance) differently than the way we process risks that arises from variability? Given that the amygdala <<>> is not vestigial in humans, this seems possible.
<<demonstration: snakes>>
<<Montauk guidance>>
<<language from the ARRA proposal>>
<<Parthenon>>
<<
1) ambiguity aversion; Ellsberg paradox
2) medical students and Bayes' rule
3) ultimatum game
4) loss aversion
5) systematicity of probability over-, understatement
6) long list of paradoxes and biases
>>
<<email Pinker>>
<<"play it safe" means to refuse to gamble when the odds are uncertain>>
<<The Hunger Games>> cooperation; even in hypercompetition, cooperation is essential
<<Looper>> hyperbolicity; see also Keith Chen's theory about language's influence on perception and thus behavior (Pinker must disapprove of this Whorfian idea):
September 26, 2014
Pinker, S. (2014). Why academics stink at writing. The Chronicle of Higher Education, The Chronicle Review. http://chronicle.com/article/Why-Academics-Writing-Stinks/148989/
"The guiding metaphor of classic style is seeing the world. The writer can see something that the reader has not yet noticed, and he orients the reader so she can see for herself. The purpose of writing is presentation, and its motive is disinterested truth. It succeeds when it aligns language with truth, the proof of success being clarity and simplicity. The truth can be known and is not the same as the language that reveals it; prose is a window onto the world. The writer knows the truth before putting it into words; he is not using the occasion of writing to sort out what he thinks. The writer and the reader are equals: The reader can recognize the truth when she sees it, as long as she is given an unobstructed view. And the process of directing the reader’s gaze takes the form of a conversation."
This is not the real purpose of scientific writing or speaking, and lay people know it even if scientists do not.
BBC News, 22 February 2013
By Tim Bowler, Business reporter, BBC News
Could the language we speak skew our financial decision-making, and does the fact that you're reading this in English make you less likely than a Mandarin speaker to save for your old age?
It is a controversial theory which has been given some weight by new findings from a Yale University behavioural economist, Keith Chen.
Prof Chen says his research proves that the grammar of the language we speak affects both our finances and our health.
Bluntly, he says, if you speak English you are likely to save less for your old age, smoke more and get less exercise than if you speak a language like Mandarin, Yoruba or Malay.
Future-speak
Prof Chen divides the world's languages into two groups, depending on how they treat the concept of time.
Strong future-time reference languages (strong FTR) require their speakers to use a different tense when speaking of the future. Weak future-time reference (weak FTR) languages do not.
"If I wanted to explain to an English-speaking colleague why I can't attend a meeting later today, I could not say 'I go to a seminar', English grammar would oblige me to say 'I will go, am going, or have to go to a seminar'.
"If, on the other hand, I were speaking Mandarin, it would be quite natural for me to omit any marker of future time and say 'I go listen seminar' since the context leaves little room for misunderstanding," says Prof Chen.
Even within European languages there are clear grammatical differences in the way they treat future events, he says.
"In English you have to say 'it will rain tomorrow' while in German you can say 'morgen regnet es' - it rains tomorrow."
Disassociating the future
Speakers of languages which only use the present tense when dealing with the future are likely to save more money than those who speak languages which require the use of a future tense, he argues.
So how does a mere difference in grammar cause people to save less for their retirement?
"The act of savings is fundamentally about understanding that your future self - the person you're saving for - is in some sense equivalent to your present self," Prof Chen told the BBC's Business Daily.
"If your language separates the future and the present in its grammar that seems to lead you to slightly disassociate the future from the present every time you speak.
"That effectively makes it harder for you to save."
Even more controversial is Prof Chen's assertion that language differences underpin wider differences in people's behaviour.
In his research paper, he says that compared to speakers of languages which use a future tense, speakers of languages with no real future tense are:
Likely to have saved 39% more by the time they retire
31% more likely to save in a year
24% less likely to smoke
29% more likely to be physically active
13% less likely to be obese
Far-fetched?
Not surprisingly, Prof Chen's findings have been criticised by both economists and linguists.
They argue there are a number of cultural, social, or economic reasons why different language speakers behave differently.
It is a point Prof Chen acknowledges, saying "I completely agree, it seemed far-fetched to me when I started doing this research as well."
But he says his research has controlled for all these factors, by concentrating on nine multi-lingual countries: Belgium, Burkina Faso, Ethiopia, Estonia, DR Congo, Nigeria, Malaysia, Singapore, and Switzerland.
"You can find families that live right next door to each other, have exactly the same education levels, exactly the same income and even exactly same religion.
"Yet the family that speaks a language that doesn't distinguish between the future and the present will save dramatically more," he says.
In Nigeria, for example, Hausa has multiple future tenses, while Yoruba does not.
"You can find Nigerians who speak Hausa and Yoruba who live next to each other and yet have radically different savings behaviour."
Findings challenged
But Morten Lau, director of Durham University's Centre for Behavioural Economics, says the factors which affect how much people save have little to do with language.
"In my own work with savings, it is interest rates that determine savings behaviour."
Prof Lau says there are often significant differences within language groups, and just using the average of these results in analysis can prove problematic.
"You have to be careful the inferences you make from correlations like these. It is very difficult to control for multiple factors."
"For instance, in our own research in Denmark, we found that male smokers wanted a higher interest rate on their savings than did non-smokers. But that this did not apply to women smokers."
'A tempting idea'
Linguist John McWhorter, of Columbia University, says any influence a language's structure has on the way its speakers see their world is extremely subtle.
"The extent to which the language shapes the thought is tiny. We're talking about milliseconds of reaction.
"None of it has ever been proven to have anything to do with how people see the world or experience life.
"It's a tempting idea that simply doesn't make any sense."
Also, he says, some languages have been wrongly classified, thus undermining the statistical correlations.
"Russian, and languages like it, are a lot more like Mandarin than Keith Chen thinks."
Despite his critics, Prof Chen insists his findings are robust.
"What's remarkable, is when you find correlations this strong and that survive so many aggressive sets of controls, it's actually hard to come up with a story of what else might be causing this."
So what does Prof Chen think of the idea that, if he is right, English speakers who want to save more for their retirement should talk entirely in the present tense?
"It actually seems like encouraging yourself to think in the present tense makes it a little bit easier to engage in self-control."
Probability is hard
Probability is a very young discipline
Although mathematics is more than 22 centuries old, probability has been studied for only 2 or 3
Only invented for resolving games of chance
Probability is famously counterintuitive
Monty Hall problem embarrassed prominent scholars
Experts (even Laplace) make egregious mistakes
De Morgan left probability because it was too hard
Rife with paradoxes, unlike any other branch of math
Probability paradoxes
Ellsberg paradox
St. Petersburg paradox
Two-envelopes problem
Monty Hall problem
Simpson’s paradox
Bertrand paradox
Berkson’s paradox
Sleeping Beauty problem
Cognitive biases
Psychometricians have documented cognitive biases in humans that make us prone to judgment errors because of the way our brains are wired
Groupthink, stereotyping, memory flaws, illusions of control
Kahneman and Tversky reviewed many such biases in how humans perceive risks and uncertainties and make decisions
Decision biases
Loss aversion
Disliking a loss more fervently than liking a gain of the same magnitude
Ambiguity aversion
Avoiding options when probabilities seem unknown
Zero-risk bias
Preferring to reduce a small risk to zero over a greater reduction in a larger risk
Anchoring
Relying too heavily on a past reference or one piece of information
Availability heuristic
Estimating likelihood of something by the ease with which it’s remembered
Uncertainty biases
Probability misperception
Overestimating chances of rare outcomes, underestimating chances of common ones
Conjunction fallacy
Assuming that specific conditions are more probable than general ones
Pseudocertainty
Making risk-averse choices for positive outcomes, but risk-seeking for negative
Overconfidence
Excessive confidence in one’s own predictions
Base rate fallacy
Neglecting available statistical data in favor of particulars
Neglect of probability
Disregarding probability in decision making under uncertainty
Other biases
Clustering illusion
Seeing patterns in noise
Gambler's fallacy
Thinking future probabilities are altered by past events, e.g., P(head | 4 tails)
Framing
Drawing different conclusions based on how data are presented
Regression toward the mean
Expecting extreme performance to continue
Ludic fallacy
Believing that chance in life is like chance in games
Primacy
Weighting initial events more than subsequent events
Recency
Weighting recent events more than earlier events
Hyperbolic discounting
Strongly preferring immediate payoffs over later ones (more intense the closer to the present the payoffs are)
Heuristics
These biases are presumed to be the result of using mental shortcuts, called “heuristics”
Humans’ misconceptions are the result of bad wiring in our brains
And, apparently, people are especially stupid about risks and uncertainty
How can this be?
If these biases are so bad, how is it that humans have been so successful evolutionarily?
Risks and uncertainty have likely been pretty prominent in evolution over the last 10⁶ years
Luce noted that the main finding of decision theory is that humans don’t make decisions the way decision theory says they should
Neuroscience of risk perception
Instead of being divided into rational and emotional sides, the human brain has many special-purpose calculators
(Marr 1982; Barkow et al. 1992; Pinker 1997, 2002)
The format of sensory data triggers a calculator
(e.g. Cosmides & Tooby 1996; Gigerenzer 1991)
Different calculators give competing solutions, or calculate different components of total risk
(e.g. Glimcher & Rustichini 2004 and references therein)
List of mental calculators
Language (grammar and memorized dictionary)
Practical physics (pre-Newtonian)
Intuitive biology (animate differs from inanimate)
Intuitive engineering (tools designed for a purpose)
Spatial sense (dead reckoner and mental maps)
Number sense (1, 2, 3, many)
Probability sense (frequentist Bayes)
Uncertainty detection (procrastination)
Intuitive economics (reciprocity, trust, equity, fairness)
Intuitive psychology (theory of mind, deception)
Bayesian reasoning (poor)
12-18% correct
Bayesian reasoning (good)
A calculator must be triggered
Humans have an innate probability sense, but it is triggered by natural frequencies
This calculator kicked in for the medical students who got the question in terms of natural frequencies, and they mostly solved it
The mere presence of the percent signs in the question hobbled the other group
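A worked example makes the format effect concrete. The numbers below are the standard textbook illustration (base rate 1%, sensitivity 80%, false-positive rate 9.6%), not necessarily the exact problem posed to the medical students:

# percentage format: requires explicit use of Bayes' rule
prevalence, sensitivity, false_pos = 0.01, 0.80, 0.096
p_pos = sensitivity * prevalence + false_pos * (1 - prevalence)
print("P(disease | positive) =", sensitivity * prevalence / p_pos)  # about 0.078

# natural-frequency format: out of 1000 people, 10 have the disease and
# 8 of them test positive; of the 990 healthy, about 95 test positive,
# so the same answer is nearly transparent
sick_pos, healthy_pos = 8, 95
print("8 of the ~103 positives are sick:", sick_pos / (sick_pos + healthy_pos))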
Multiple calculators may fire
There are distinct calculators associated with
Probabilities and risk (variability): the medical students example
Ambiguity and uncertainty (incertitude): Hsu et al.
Trust and fairness: the Ultimatum Game
Brain processes them differently
Different parts of the brain
Different chemical systems
They can give conflicting responses
Risk aversion
Suppose you can get $1000 if a ball drawn at random from an urn with half red and half blue balls is red, or you can just take $500 now
Which prize do you want?
Ambiguity aversion
Balls can be either red or blue
Two urns, both with 36 balls
Get $1000 if a randomly drawn ball is red
Which urn do you want to draw from?
Ellsberg Paradox
Balls can be red, black or yellow (probabilities R, B, Y)
A well-mixed urn has 30 red balls and 60 other balls
Don’t know how many are black, how many are yellow
Gamble A: get $100 if you draw red. Gamble B: get $100 if you draw black.
Gamble C: get $100 if you draw red or yellow. Gamble D: get $100 if you draw black or yellow.
Persistent paradox
Most people prefer A to B (so are saying R > B) but also prefer D to C (saying R < B)
People always prefer unambiguous outcomes
Doesn’t depend on your utility function or payoff
Not related to risk aversion
It is clear evidence for ambiguity aversion
Not explained by probability theory, or by prospect theory
Other species
Chimpanzees and bonobos prefer peanuts (which they like less than bananas) when they do not know the probability of getting bananas
fMRI
Hsu et al. (2005) found localized regions of activity in the brain under situations of ambiguity (incertitude)
Amygdala associated with processing fear and threat
Ambiguity/incertitude detector
Humans have an incertitude processor
Triggered by situations with ambiguity
Especially focused on the worst case
Common response is procrastination
Functional organ
Normal feature of the human brain
Not a product of learning
Visible in fMRI
Brain lesions can make people insensitive to incertitude…so they behave as Bayesians
Biological basis for Ellsberg
Probability sense and the ambiguity detector interfere with each other
Humans do not make decisions based purely on probability in such cases
Probabilists use equiprobability to model incertitude, which confounds it with variability
Loss aversion
(asymmetry in perceptions about losses and gains)
Prospect theory
But why?
Prospect theory is the state of the art
Purely descriptive
Doesn’t say why loss aversion should exist
What is the biological basis for loss aversion?
How could it have arisen in human evolution?
Loss aversion disappears with certainty
Loss aversion disappears with a person you trust, or after the gamble has been realized
Gilbert et al. 2004
Kermer et al. 2006
Yechiam & Ert 2007
Erev, Ert, & Yechiam 2008
Ert & Erev 2008
When there’s no doubt about the exchangeability of losses and gains, the cone of uncertainty contracts to the symmetric utility function
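For concreteness, here is the prospect-theory value function evaluated with the commonly cited Tversky and Kahneman (1992) median parameter estimates; setting the loss-aversion coefficient to 1 recovers the symmetric utility curve described above:

def value(x, alpha=0.88, lam=2.25):
    # prospect-theory value function: concave for gains,
    # convex and steeper (by the loss-aversion factor lam) for losses
    return x**alpha if x >= 0 else -lam * (-x)**alpha

for x in (10, 100):
    print(f"gain {x}: {value(x):7.2f}   loss {x}: {value(-x):8.2f}   "
          f"symmetric loss (lam = 1): {value(-x, lam=1.0):7.2f}")
# with lam = 2.25 a loss looms more than twice as large as an equal gain;
# with lam = 1 gains and losses are valued symmetrically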
Direct experimental evidence
Ellsberg made the probabilities ambiguous
Psychologist Christian Luhmann (Stony Brook) made rewards ambiguous
Visually obscured the promised payoffs
“I’ll pay you between 1 and 10 bucks”
Loss aversion varies with the size of uncertainty
Disappears with certainty
Clinical evidence
Amygdala damage eliminates loss aversion
Bilateral lesions of the amygdala dramatically reduce loss aversion but do not impact an individual’s ability to gamble and respond to changing expected value and risk (n = 2)
Amygdalectomized rhesus monkeys approach stimuli that healthy monkeys avoid
But why pessimism?
Pessimism is often advantageous evolutionarily
Natural selection can favor pessimism
Death is ‘hard selection’
Animal foraging strategies
Programmed plant behaviors
Being wrong often has asymmetric consequences
Foraging: Finding dinner versus being dinner
Competition: Preemption versus being preempted
Pessimism advantageous in animals
And even in plants!
Plant pessimism
When grown together, plants make more roots
Less efficient than what they do when alone
Competition is asymmetric: first come, first served
They both grow more roots than they need just to prevent being competitively preempted
Tragedy of the commons / prisoners’ dilemma
Pessimism is not inevitable
Pessimism is not the only reaction to uncertainty
Normal people in stressful situations
Pathological gamblers
People in manic states
Ambiguity aversion decreases with optimism (Pulford 2009)
Collisions of the two cameras
Ellsberg paradox
Ambiguity aversion
Loss aversion
Hyperbolic discounting
Two-envelopes problem
Slovic’s two-dimensional plot of risks
Human irrationality
Fairness trumps self-interest
Intuitive economics calculator
Computes fairness of situations
Detects cheaters who are getting more than their share, or not shouldering their responsibility
Ultimatum game
A scientist, some money, and two players
The scientist offers an amount of money
One of the players proposes how the money should be divided between the two players
Other player can accept (and both get their share) or reject the division (and neither gets anything)
How should you play?
Any economist would tell you that the rational optimum behavior is
Responder: always accept any deal (it’s free money!)
Proposer: always offer the smallest possible amount
This selfish behavior, utility maximization, is the economist’s definition of rationality
How do people actually play?
Actual plays are much closer to fair
Proposer offers a split much closer to 50:50
Responder accepts only if the split is closer to 50:50
This pattern may be universal in human behavior
Over 100 papers in 25 western societies
15 non-western societies in 12 countries across 5 continents, including slash-and-burn horticulturalists, nomadic herders, foragers, and sedentary farmers, with stakes equal to one day’s wages
Universal in humans
The Ultimatum Game is a game whose outcome is only surprising to economists
In a variant called the Dictator Game, normal humans will share the endowment even when the responder has no say at all
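The contrast between the economist's prescription and actual play can be sketched numerically. The rejection curve below is a made-up placeholder (rejection probability falling linearly to zero at a 50:50 split), not fitted to any of the studies cited:

def expected_take(offer, pot=10.0, fair_minded=True):
    # purely "rational" responders accept anything; fairness-minded responders
    # reject unfair splits with probability rising as the offer shrinks
    p_accept = 1.0 if not fair_minded else min(1.0, 2.0 * offer / pot)
    return p_accept * (pot - offer)  # the proposer's expected share

offers = range(0, 11)
rational = max(offers, key=lambda o: expected_take(o, fair_minded=False))
empirical = max(offers, key=lambda o: expected_take(o))
print("best offer vs. homo economicus:", rational)            # 0: keep everything
print("best offer vs. fairness-minded responders:", empirical)  # 5: a 50:50 split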
Why do humans do this?
Adaptation for reciprocal altruism
Cares about fairness and reciprocity
Alters outcomes of others at a personal cost
Rewards those who act in a prosocial manner
Punishes those who act selfishly, even when punishment is costly
Mediated by the fairness calculator
Pattern absent when fairness is not an issue
Who plays the game ‘rationally’?
People playing against machines
Sociopaths
Children under five years old
Chimpanzees
Chimps and toddlers are more rational than normal adult humans
Evolutionary significance
Human sociality completely explains behavior
The fairness calculator is an evolutionary adaptation which allows humans to be social in a way that is almost unique among animals
In a way that protects against the destructive effects of self-interest, without which cheaters would prevail in evolutionary competition
“Irrationality”
Irrationality is a hallmark of human decisions
Why are humans biased, stupid, irrational?
Using the wrong mental calculator (optical illusion)
Disagreement among mental calculators
Concerned with issues outside the risk analysis
Justice
Fairness
Chance the risk analyst is lying
Chance the risk analyst is inept
Import for risk assessment
Risk analyses woefully incomplete
Neglect or misunderstand incertitude
Omit important issues and thus understate risks
Presentations use very misleading formatting
Percentages, relative frequencies, conditionals, etc.
Both problems can be fixed
By changing analysts’ behavior (not the public’s)
So how must incertitude be propagated?
Must be treated differently
Variability should be modeled as randomness with the methods of probability theory
Incertitude should be modeled as ignorance with the methods of interval analysis
Imprecise probabilities can do both at once
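A minimal sketch (with hypothetical numbers) of what this separation looks like in practice: a quantity that is the sum of a random variable (variability) and an interval (incertitude) yields bounds on a distribution, i.e., a crude p-box, rather than a single spuriously precise distribution.

import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(10.0, 1.0, size=100_000)  # variability: modeled as a distribution
y_lo, y_hi = 2.0, 5.0                    # incertitude: known only as an interval

# pushing the interval endpoints through gives bounds on every quantile of z
z_lo, z_hi = np.sort(x + y_lo), np.sort(x + y_hi)
for q in (0.05, 0.50, 0.95):
    i = int(q * len(x))
    print(f"the {q:.0%} quantile of z = x + y lies in [{z_lo[i]:.2f}, {z_hi[i]:.2f}]")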
Incertitude is very common
Periodic observations
When did the fish in my aquarium die during the night?
Plus-or-minus measurement uncertainties
Coarse measurements, measurements from digital readouts
Non-detects and data censoring
Chemical detection limits, studies prematurely terminated
Privacy requirements
Epidemiological or medical information, census data
Theoretical constraints
Concentrations, solubilities, probabilities, survival rates
Bounding studies
Presumed or hypothetical limits in what-if calculations
Need ways to relax assumptions
Hard to say what the distribution is precisely
Non-independent, or unknown dependencies
Uncertainties that may not cancel
Possibly large uncertainties
Model uncertainty
Probability bounds analysis (PBA)
Sidesteps the major criticisms
Doesn’t force you to make any assumptions
Can use only whatever information is available
Bridges worst case and probabilistic analysis
Distinguishes variability and incertitude
Acceptable to both Bayesians and frequentists
Probability box (p-box)
Uncertain numbers
Uncertainty arithmetic
We can do math on p-boxes
When inputs are distributions, the answers conform with probability theory
When inputs are intervals, the results agree with interval (worst case) analysis
Example
Calculations
All standard mathematical operations
Arithmetic (+, −, ×, ÷, ^, min, max)
Transformations (exp, ln, sin, tan, abs, sqrt, etc.)
Magnitude comparisons (<, ≤, >, ≥)
Other operations (nonlinear ODEs, finite-element methods)
Faster than Monte Carlo
Guaranteed to bound the answer
Optimal solutions often easy to compute
Conclusions
Variability (risk) versus incertitude (ambiguity)
Academic distinction
Observational evidence for differences in decisions
Experimental evidence for different brain activities
P-box technology can handle both
Multiple cameras explain “irrational” decisions
“Irrational” behavior has fitness advantages
Take-home messages
Ambiguity aversion is universal in human decision making, and is utterly incompatible with Bayesian norms
Humans are wired by evolution to process incertitude separately and differently from variability
We have a calculus that handles both
Acknowledgments
National Science Foundation
National Institutes of Health
NASA
Imprecision Fridays at Applied Biomathematics
Christian Luhmann, William McGill, Nick Friedenberg, Kari Sentz, Lev Ginzburg, Resit Akçakaya, Rafael D’Andrea, James Mickley, Jimmie Goode
End
Sure, but…
Prospect theory is descriptive but not explanatory
Description is useful, but we’d prefer to understand why these phenomena occur
We’re interested in broad human patterns
There may be cultural and social differences
There is certainly wide variation among individuals
Errors and biases compared to what?
Compared to rational choice theory
Assumes people know their preferences and can rank their options accordingly
Assumes people choose the best action according to their preferences and the constraints they face
Assumes patterns arising in groups and societies reflect choices made by individuals as they maximize their benefits and minimize their costs
Maybe rational choice theory’s wrong
Some biases are not mistakes, but adaptations when viewed in an evolutionary context
Sometimes the information just had the wrong format (equivalent to an optical illusion)
Axioms of rational choice theory
Transitivity (ordering): A > B and B > C imply A > C
Monotonicity: more is better
Invariance (stochastic dominance): expected utility gives preference
Continuity: p can be anything between 0 and 1
Finiteness: no infinite values
Reduction (substitutability): the probability calculus doesn’t affect decisions
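The transitivity axiom is what licenses representing preferences by a utility ranking at all, so a cycle in strict preferences defeats any utility function. A small sketch (our own illustration, not from the cited literature) checks a preference list for such cycles:

def has_utility_ranking(prefs):
    # prefs: list of (better, worse) pairs; a utility representation exists
    # for strict preferences only if the directed graph has no cycle
    graph = {}
    for a, b in prefs:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set())
    state = {}  # 1 = on the current path, 2 = fully explored
    def cyclic(n):
        if state.get(n) == 1: return True
        if state.get(n) == 2: return False
        state[n] = 1
        if any(cyclic(m) for m in graph[n]): return True
        state[n] = 2
        return False
    return not any(cyclic(n) for n in graph)

print(has_utility_ranking([("A", "B"), ("B", "C")]))              # True: transitive
print(has_utility_ranking([("A", "B"), ("B", "C"), ("C", "A")]))  # False: A>B>C>A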
<<Adolescents’ risk-taking behavior is driven by tolerance to ambiguity>>
http://www.pnas.org/content/early/2012/09/25/1207144109
<< fairness-mediated sociality is not exclusive to humans
Kari:
I watched the whole TED talk by Frans de Waal at http://www.ted.com/talks/lang/en/frans_de_waal_do_animals_have_morals.html, which was interesting on several levels. Beyond the implications for morality, I'm most interested in what it means for the non-specialness of fairness-mediated sociality in humans. I guess we shouldn't be surprised at what they found. Anybody who's passed out treats to multiple dogs at the same time knows that some animals have a sense of fairness.
But the work with Brosnan that produced that flabbergasting video might be problematic for me. I'd taken the recent experiments that show that chimpanzees behave rationally in the Ultimatum Game to mean that normal adult humans had a more finely developed sense of fairness than chimpanzees, just as other work showed their sense of fairness was better than that in human toddlers, and better than in human sociopaths. But the work by Brosnan and de Waal says this is naive, especially their findings in the Dictator Game and his report that in the Ultimatum Game some chimpanzees, in some pairs, when given the grape will refuse it until the other gets one too. Those chimps certainly are not rational in the economist's sense. I guess what we're seeing is that there's variation among chimps just like there is among humans, which are after all composed not only of normal adult humans but also toddlers, sociopaths, economists, business executives, ....
Scott
Thanks for this.
I choked on my own spit when the monkey on the left threw it back at her.
Scott
>>
This paper was written as a part of a Small Business Innovation Research grant (RC3LM010794) from the National Library of Medicine, a component of the National Institutes of Health, funded under the American Recovery and Reinvestment Act. The opinions expressed herein are those of the authors and do not reflect the positions of the funding agency.
Ermer, E., S.A. Guerin, L. Cosmides, J. Tooby, and M.B. Miller (2006). Theory of mind broad and narrow: reasoning about social exchange engages ToM areas, precautionary reasoning does not. Social Neuroscience 1: 196–219. http://www.psych.ucsb.edu/research/cep/papers/TOMbroadnarrow.pdf
This material is not used in the paper.
language of infection: food next to rat poison in the grocery bag
Scott Ferson, Jul 31, 2012
Ludvig, E.A., et al. (2011). Of black swans and tossed coins: is the description-experience gap in risky choice limited to rare events? PLoS One 6(6): e20262. PMID 21673807; PMCID PMC3105996 (free full text). http://www.ncbi.nlm.nih.gov/m/pubmed/21673807/
Abstract: When faced with risky decisions, people tend to be risk averse for gains and risk seeking for losses (the reflection effect). Studies examining this risk-sensitive decision making, however, typically ask people directly what they would do in hypothetical choice scenarios. A recent flurry of studies has shown that when these risky decisions include rare outcomes, people make different choices for explicitly described probabilities than for experienced probabilistic outcomes. Specifically, rare outcomes are overweighted when described and underweighted when experienced. In two experiments, we examined risk-sensitive decision making when the risky option had two equally probable (50%) outcomes. For experience-based decisions, there was a reversal of the reflection effect with greater risk seeking for gains than for losses, as compared to description-based decisions. This fundamental difference in experienced and described choices cannot be explained by the weighting of rare events and suggests a separate subjective utility curve for experience.
Biological and evolutionary context of risk analysis
About
This site collects material for a review paper (or papers) defining risk analysis and communication from a biological and evolutionary perspective, and explaining human cognitive biases and heuristics as features of our multicameral brain.
Acknowledgments
Support for this project was provided by the National Library of Medicine, a component of the National Institutes of Health (NIH), through a Small Business Innovation Research grant (award number RC3LM010794) to Applied Biomathematics funded under the American Recovery and Reinvestment Act.
Disclaimer
The views and opinions expressed herein are tentative and pre-decisional, and should not be considered those of any of the authors or collaborators, nor of Applied Biomathematics, the National Library of Medicine, National Institutes of Health or other sponsors.