From MacKay, "Information Theory, Inference, and Learning Algorithms," Exercise 3.15 (page 50): a Belgian one-euro coin, spun on edge 250 times, came up heads 140 times and tails 110. MacKay asks, "But do these data give evidence that the coin is biased rather than fair?"
To answer that question, we will start with coin.py, which computes a Bayesian estimate of the parameter of a (possibly) biased coin.
Download and run it, and let's discuss.
Instead of the two hypotheses we saw in Chapter 7, it uses a suite of hypotheses to represent possible values of the parameter.
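coin.py itself is not reproduced here, but the core idea is a grid of hypotheses for p, the probability of heads, each updated with the likelihood of the data. A minimal self-contained sketch (the 140/110 counts are MacKay's data; the code and names are my illustration, not coin.py's actual source):

    import numpy as np

    # Suite of hypotheses: a grid of possible values of p = P(heads)
    ps = np.linspace(0, 1, 101)
    prior = np.ones(len(ps)) / len(ps)       # uniform prior over p

    heads, tails = 140, 110                  # MacKay's euro-spinning data

    # Likelihood of the data under each hypothesis; the binomial
    # coefficient is the same for every p, so it cancels on normalization.
    like = ps**heads * (1 - ps)**tails

    posterior = prior * like
    posterior /= posterior.sum()

    print("posterior mean of p:", (ps * posterior).sum())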
Exercise: Write a function that takes a posterior distribution (as a Pmf) and computes a credible interval.
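One way to do it, representing the Pmf as a plain dict from value to probability (a simplification of the Pmf class the course code uses):

    def credible_interval(pmf, percentage=90):
        """Return the central interval containing `percentage` of the mass.

        pmf: dict mapping each hypothesis value to its probability
        """
        tail = (1 - percentage / 100) / 2
        total = 0.0
        low = high = None
        for value in sorted(pmf):
            total += pmf[value]
            if low is None and total >= tail:
                low = value
            if high is None and total >= 1 - tail:
                high = value
                break
        return low, high

Applied to the posterior above, credible_interval(dict(zip(ps, posterior))) brackets the central 90% of the mass for p.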
Estimating the parameter, though, does not exactly answer the question as MacKay posed it: does the evidence support the hypothesis that the coin is biased? Can we formalize the hypotheses "the coin is unbiased" and "the coin is biased" and compute a likelihood ratio?
Take a look at coin2.py.
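coin2.py's source isn't shown here, but here is a sketch of one way to formalize the comparison, following MacKay: under "unbiased", p = 0.5 exactly; under "biased", p is unknown, so we average the likelihood over a uniform prior on p (that integral is a Beta function):

    from math import exp, lgamma, log

    heads, tails = 140, 110
    n = heads + tails

    # P(D | fair): each spin has probability 0.5
    loglike_fair = n * log(0.5)

    # P(D | biased) with a uniform prior on p:
    # integral of p**heads * (1-p)**tails dp = B(heads+1, tails+1)
    loglike_biased = lgamma(heads + 1) + lgamma(tails + 1) - lgamma(n + 2)

    # Bayes factor in favor of "biased"
    ratio = exp(loglike_biased - loglike_fair)
    print("P(D|biased) / P(D|fair) =", ratio)

With this choice of prior, the ratio comes out a little under 0.5, so the data weakly favor the fair hypothesis; the answer depends on the prior we put on p under the biased hypothesis, a sensitivity MacKay discusses.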
Exercise: If you started with the prior P(biased) = 0.1, what is your posterior?
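A sketch of the computation, using the odds form of Bayes's theorem with the likelihood ratio from the previous block:

    prior = 0.1
    prior_odds = prior / (1 - prior)

    # posterior odds = prior odds * likelihood ratio
    post_odds = prior_odds * ratio
    print("P(biased | D) =", post_odds / (1 + post_odds))

With the ratio near 0.48, the posterior probability of bias comes out around 0.05.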
Properties like bias and mean squared error (MSE) are long-term properties of using an estimator over many iterations of the estimation game. For any particular estimate, we don't know the error; if we did, we would know the answer and wouldn't need the estimator.
Which is better, an MLE or an estimator that minimizes MSE?
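To make those long-term properties concrete, here is a small simulation of the estimation game (my own illustration, anticipating the locomotive problem below): guess the upper bound N of a discrete uniform distribution from a sample, using the MLE (the sample maximum) and a scaled version of it.

    import random

    def play(estimator, N=1000, m=10, iters=10000):
        """Play the estimation game many times; report bias and MSE."""
        errors = []
        for _ in range(iters):
            sample = [random.randint(1, N) for _ in range(m)]
            errors.append(estimator(sample) - N)
        bias = sum(errors) / iters
        mse = sum(e * e for e in errors) / iters
        return bias, mse

    def mle(sample):
        return max(sample)                    # can never exceed N: biased low

    def scaled(sample):
        m = len(sample)
        return max(sample) * (m + 1) / m - 1  # approximately unbiased

    for name, est in [("MLE", mle), ("scaled max", scaled)]:
        bias, mse = play(est)
        print(f"{name:10s} bias={bias:7.1f}  mse={mse:10.1f}")

The MLE is biased low, while the scaled estimate trades that bias for a different error profile; which one is "better" depends on which long-term property you care about.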
Now let's look at locomotive.py. The problem is adapted from Mosteller, "Fifty Challenging Problems in Probability":
"A railroad numbers its locomotives in order 1..N. One day you see a locomotive with the number 60. Estimate how many locomotives the railroad has."
Exercise: generalize locomotive.py for a set of observations (not just one).
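A sketch of one approach: with independent sightings, the likelihood of a set of serial numbers is the product of the individual likelihoods, so the update just loops over the dataset (the function name is mine):

    def update_all(pmf, dataset):
        """Update the suite with each observed serial number."""
        for data in dataset:
            for N in pmf:
                pmf[N] *= 1 / N if N >= data else 0
        total = sum(pmf.values())
        for N in pmf:
            pmf[N] /= total
        return pmf

    # e.g. update_all(pmf, [60, 30, 90]); more data narrows the posterior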