Post date: Nov 27, 2020 2:47:10 PM
Emails starting 16 November 2020, under the subject line "Colyvan's treatment of the Cox argument that probability is the only possible model of uncertainty"
Scott Ferson wrote:
Cox's famous paper arguing that probability is the only possible model of uncertainty is still considered serious by some people, but not by any serious people. [See Colyvan's paper on the subject attached at the bottom of this page.]
Alexander Wimbush wrote:
So overall I’m not liking this so far. But I am enjoying linking unanswered questions from media to this problem.
At the end of The Thing, Mac and Childs wait in shelter, neither sure whether the other is the Thing. The ending is deliberately ambiguous: it could be either of them, neither, or maybe even both. You never know, and you aren't supposed to. Asking whether Mac is the Thing has no answer, but surely that doesn't mean the statement 'Mac is the Thing' is partially true? There is evidence for truth and evidence for falsity, but no evidence for partial truth or partial falsity, right? Just because the answer is inconclusive doesn't mean the probability of 'P or not P' can somehow be something other than 1. How can you have evidence for partial truth?
I say this from a perspective of someone who has absolutely no idea what they’re on about, but I really like The Thing. Who reads Sherlock anymore?
Scott wrote:
Evidence, and where it might come from, is on one side. But the fact of the matter is the other side.
The number of rainy days depends, in a graded way, on how we define 'rainy'. The fact of the matter is vaguely [gradedly] defined, in millimeters of rainfall. That's the part that needs fuzzy membership in my opinion, although it's not the arena we've been discussing during the fuzzy Friday meetings.
Dominik Hose wrote:
Regarding 'The Thing', I agree with Alex. Regarding Holmes, by Colyvan's argument, imho the probability is zero, because Holmes does not exist and can't have done anything. I will not start putting numbers (or intervals) on the probability that some fictional character has been to some street exactly x times given that somebody is writing he has been there at least y times.
I am still having issues with this 'third kind of uncertainty' introduced by vagueness, ambiguity, etc. If I am not able to clearly state what I am making my hypothesis about, then I really should not try to put numbers on it. Isn't this what distinguishes everyday speech from scientific speech? If you ask me about H: 'good sprinters are tall', do not expect me to tell you that P(H) = 0.987653826 or even just P(H) = 0.9. I will only tell you 'yes, from my experience this is likely'. Now, if you ask me the probability of H: 'a winner of the Olympics is taller than 180 cm', I will tell you that, based on the data that 2/3 of the last Olympic winners were taller than 180 cm, I have 50% confidence that P(H) > 0.8 (see Figure). If I were omniscient I would even be able to tell you that it is exactly k/N (interpreting probability as a percentage, i.e. the chance of randomly selecting one of the k gold medalists taller than 180 cm out of all N gold medalists), but only in a perfect world.
I guess what I am trying to say is this: if you want a numerical value for quantifying uncertainty from me, at least give me a precise hypothesis I can work with; then I'll see what I can do. If you want to specify a fuzzy membership function for what you mean by 'rainy days', I can try to work with that, too, but it has to be you giving me the membership function. I will not start guessing what you mean.
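A figure like the '50% confidence that P(H) > 0.8' above can be reconstructed with the binomial confidence box (c-box), whose bounds Beta(k, n-k+1) and Beta(k+1, n-k) have closed-form tail probabilities via the binomial CDF. This is only one plausible reconstruction, a stdlib-only sketch; the thread does not say how the number was actually computed:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(Binomial(n, p) <= k), computed by direct summation."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Confidence that the true chance p exceeds t, given k successes in n trials.
# The binomial c-box bounds Beta(k, n-k+1) and Beta(k+1, n-k) satisfy
#   P(Beta(k, n-k+1) > t) = P(Bin(n, t) <= k-1)
#   P(Beta(k+1, n-k) > t) = P(Bin(n, t) <= k)
k, n, t = 2, 3, 0.8   # 2 of 3 winners taller than 180 cm (numbers from the thread)
lo = binom_cdf(k - 1, n, t)
hi = binom_cdf(k, n, t)
print(f"confidence that p > {t}: between {lo:.3f} and {hi:.3f}")
```

The upper bound comes out at about 0.488, i.e. roughly the 50% confidence quoted; the lower bound is about 0.104, which illustrates how wide the honest answer is with only three data points.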
I really think that we should base probability (and possibility) distributions on our data. I do not know what a fair coin is supposed to be, but I know how to count outcomes. And probability itself does not exist. Is it anything more than an idealized model (satisfying the Kolmogorov axioms) to which I can try and fit my data? And since (by its own definition) I will not be able to exactly deduce it from finite data, I have to think about how to quantify (more or less) plausible families of probability distributions, a.k.a. imprecise probabilities. This is what we all do all the time. Even the early scientists with their deficient notion of a set were working with a precise description that worked for their purposes until they found out that, with respect to some other problems, the definition needed refinement.
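One concrete version of the 'plausible families of probability distributions' idea is a distribution-free band around the empirical CDF: every distribution passing through the band is consistent with the data, which is a p-box in all but name. A minimal sketch, with the band half-width taken from the Dvoretzky-Kiefer-Wolfowitz inequality and the Gaussian sample purely illustrative (neither choice comes from the thread):

```python
import math
import random

def dkw_band(samples, alpha=0.05):
    """Return sorted data with lower/upper CDF bounds such that the true CDF
    lies within the band at every point with probability at least 1 - alpha."""
    n = len(samples)
    eps = math.sqrt(math.log(2 / alpha) / (2 * n))   # DKW half-width
    xs = sorted(samples)
    ecdf = [(i + 1) / n for i in range(n)]           # empirical CDF at each xs[i]
    lower = [max(f - eps, 0.0) for f in ecdf]
    upper = [min(f + eps, 1.0) for f in ecdf]
    return xs, lower, upper

random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(100)]
xs, lower, upper = dkw_band(data)
```

The family of all CDFs squeezed between `lower` and `upper` is exactly the kind of 'more or less plausible' set the paragraph describes; more data shrinks the band and the family with it.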
This was quite some rambling and I think somebody must have phrased this more elegantly (and more coherently) before me. What do you think?
Scott wrote:
But couldn't Holmes' fictionality likewise support a contrary probability of 1?
Maybe you are avoiding a larger point (not sure that Mark actually makes it): that probabilists hold that their calculus is appropriate for all pronouncements, including ones that have no evidence either way. They have no problem at all with setting a probability for pronouncements about one-off events such as whether OJ actually killed those people, or hypothetical events involving fictional characters, or even ethereal concerns such as whether God exists. (Talk about being non-scientific!)
In his paper about his squizzles, Roger Cooke, like you, protests that we have to know what all the words mean, so pronouncements about whether the slithy toves did or did not gyre and gimble in the wabe might fail the clairvoyant test (clarity test). But words can have meanings despite there being dispute about them. And just because something is vague in the philosophical sense does not mean it is not clear. The number of rainy days this last October in Liverpool is perfectly clear, but it depends on arbitrarily specifying the number of millimeters of precipitation in the definition of 'rainy'. We talk about uncertainty of the third kind only to admit that it is absurd to say, as for instance the India Meteorological Department does, that a day with 2.51 mm of rain is a rainy day but one with 2.49 mm of rain is not. Using fuzzy structures, one could handle a range of definitions all at once and project the implications of that arbitrariness through subsequent calculations. It's not a matter of having to guess the definition; it's a matter of being clear about the vagueness (borderline-ness).
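The range-of-definitions idea can be sketched with a graded membership function for 'rainy'. The 1-5 mm ramp below is an illustrative assumption (it brackets the IMD's 2.5 mm cutoff mentioned above), and the daily rainfall figures are hypothetical, not Liverpool data:

```python
def rainy_membership(mm, lo=1.0, hi=5.0):
    """Graded membership of a day in 'rainy': 0 below lo mm, 1 above hi mm,
    linear in between. The [lo, hi] ramp stands in for the whole family of
    crisp cutoffs between lo and hi handled at once."""
    if mm <= lo:
        return 0.0
    if mm >= hi:
        return 1.0
    return (mm - lo) / (hi - lo)

october = [0.0, 2.49, 2.51, 7.3, 0.4, 12.0]   # hypothetical daily rainfall, mm

# Crisp IMD-style count versus graded (fuzzy cardinality) count of rainy days:
crisp = sum(1 for mm in october if mm >= 2.5)
graded = sum(rainy_membership(mm) for mm in october)
print(crisp, round(graded, 3))
```

Note that the 2.49 mm and 2.51 mm days, which the crisp cutoff forces onto opposite sides of the line, get nearly identical memberships here, which is exactly the absurdity-avoidance the paragraph is after.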
Lotfi may not have been a very nice man, but his idea to bring those non-scientific statements into the realm of science, by accounting for the arbitrariness, is a good one. He was wrong about a lot of things, but this idea seems reasonable to me, and it might be important. For example, the definition of an endangered species is set internationally by the IUCN criteria, but they are necessarily arbitrary. Species that are near--but technically not--endangered are not eligible for certain legal protections and conservation funding. It can therefore make sense for a motivated conservationist to go out and kill a few animals if doing so could tip the species over the line to reap those advantages. Such tyrannies of arbitrary reactions to vagueness are serious problems in many areas. I confess I've not seen a lot of applications of fuzzy methods successfully used to practically resolve or handle the arbitrariness, but I don't read in this area anymore so I might be missing them.
Now you're starting to talk crazy. What do you mean you don't know what a fair coin is supposed to be? I think you know. Kolmogorov never said probability doesn't exist, quite the opposite. De Finetti said that craziness, and his narrowness on the subject was to say that he only wants to talk about subjective probability. As if flippable coins don't exist!
I do agree with you about this: "And since (by its own definition) I will not be able to exactly deduce it from finite data, I have to think about how to quantify (more or less) plausible families of probability distributions, a.k.a. imprecise probabilities."
Bring me Schrödinger's head!
Reddit user pyzuhtu added "I want him dead AND alive!"