Probability Warping by Bayesians
One of my interests has been in the area of risk, and another is Bayesian inference. This paper, written jointly with Iain Fraser (in the draft phase only), looks at the problem of how a Bayesian should interpret a probability stated by another Bayesian. It proposes a simplified model of how probabilities would be "warped", in a way suggested inter alia by Prospect theory. In the example illustrated at the bottom, a Bayesian is told that the probability is 0.1 (10%), but holds a prior belief (which can be varied in the model) that the Source of the probability started from a uniform prior before receiving information. You can play with the model in Google Colab (link at the bottom).
Example output is shown above for the case where one honest Bayesian reports to another that the probability is 0.5. The "Source" is assumed by the Receiver to have a uniform prior, and the Receiver also holds a uniform prior. The resulting posteriors are shown below.
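For readers who want to experiment before opening the notebook, here is a minimal sketch in Python of one way such a Receiver's posterior could be computed. It is an illustration under stated assumptions, not the paper's exact model: it assumes the Source formed its report as the posterior mean of a Beta distribution, starting from a uniform prior and observing a fixed number of trials (n_source is an illustrative choice), and that the Receiver combines the implied evidence with its own uniform prior on a grid.

```python
import numpy as np

# Minimal sketch, not the paper's exact model.
# Assumptions: the Source starts from a uniform Beta(1, 1) prior over the
# true probability theta, observes n_source Bernoulli trials, and honestly
# reports its posterior mean. The Receiver, who also holds a uniform prior
# over theta, asks which values of theta are consistent with that report.

n_source = 10        # assumed number of trials behind the Source's report
p_report = 0.5       # probability stated by the Source

# Successes implied by a reported posterior mean of (k + 1) / (n + 2)
k_source = round(p_report * (n_source + 2) - 1)

theta = np.linspace(0.001, 0.999, 999)      # grid over the true probability
d_theta = theta[1] - theta[0]

prior = np.ones_like(theta)                 # Receiver's uniform prior

# Likelihood of the Source's inferred data given theta
likelihood = theta**k_source * (1 - theta)**(n_source - k_source)

# Receiver's posterior by Bayes' rule, normalised on the grid
posterior = prior * likelihood
posterior /= posterior.sum() * d_theta

print(f"Receiver's posterior mean: {(theta * posterior).sum() * d_theta:.3f}")
```

Changing p_report (for example to 0.1, as in the first example) or n_source shows how the Receiver's posterior in this sketch shifts with the assumed strength of the Source's evidence.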